00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 257 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.123 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.123 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.125 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.249 Using shallow fetch with depth 1 00:00:00.249 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.249 > git --version # timeout=10 00:00:00.283 > git --version # 'git version 2.39.2' 00:00:00.283 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.803 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.815 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.825 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:05.825 > git config core.sparsecheckout # timeout=10 00:00:05.835 > git read-tree -mu HEAD # timeout=10 00:00:05.851 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:05.870 Commit message: "inventory/dev: add missing long names" 00:00:05.870 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:05.982 [Pipeline] Start of Pipeline 00:00:05.992 [Pipeline] library 00:00:05.993 Loading library shm_lib@master 00:00:05.993 Library shm_lib@master is cached. Copying from home. 00:00:06.006 [Pipeline] node 00:00:21.008 Still waiting to schedule task 00:00:21.009 Waiting for next available executor on ‘vagrant-vm-host’ 00:13:18.426 Running on VM-host-SM4 in /var/jenkins/workspace/freebsd-vg-autotest_3 00:13:18.428 [Pipeline] { 00:13:18.441 [Pipeline] catchError 00:13:18.442 [Pipeline] { 00:13:18.456 [Pipeline] wrap 00:13:18.466 [Pipeline] { 00:13:18.472 [Pipeline] stage 00:13:18.473 [Pipeline] { (Prologue) 00:13:18.487 [Pipeline] echo 00:13:18.488 Node: VM-host-SM4 00:13:18.491 [Pipeline] cleanWs 00:13:18.499 [WS-CLEANUP] Deleting project workspace... 00:13:18.499 [WS-CLEANUP] Deferred wipeout is used... 
00:13:18.506 [WS-CLEANUP] done 00:13:18.650 [Pipeline] setCustomBuildProperty 00:13:18.716 [Pipeline] nodesByLabel 00:13:18.717 Found a total of 1 nodes with the 'sorcerer' label 00:13:18.725 [Pipeline] httpRequest 00:13:18.730 HttpMethod: GET 00:13:18.730 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:13:18.732 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:13:18.734 Response Code: HTTP/1.1 200 OK 00:13:18.734 Success: Status code 200 is in the accepted range: 200,404 00:13:18.734 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_3/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:13:18.873 [Pipeline] sh 00:13:19.150 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:13:19.168 [Pipeline] httpRequest 00:13:19.172 HttpMethod: GET 00:13:19.173 URL: http://10.211.164.101/packages/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:13:19.173 Sending request to url: http://10.211.164.101/packages/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:13:19.174 Response Code: HTTP/1.1 200 OK 00:13:19.175 Success: Status code 200 is in the accepted range: 200,404 00:13:19.176 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_3/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:13:21.335 [Pipeline] sh 00:13:21.612 + tar --no-same-owner -xf spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:13:24.906 [Pipeline] sh 00:13:25.182 + git -C spdk log --oneline -n5 00:13:25.182 cc94f3031 raid1: handle read errors 00:13:25.182 6e950b24b raid1: move function to avoid forward declaration later 00:13:25.182 d6aa653d2 raid1: remove common base bdev io completion function 00:13:25.182 b0b0889ef raid1: handle write errors 00:13:25.182 9820a9496 raid: add a default completion status to raid_bdev_io 00:13:25.198 [Pipeline] sh 00:13:25.474 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/84/23184/6 00:13:26.404 From https://review.spdk.io/gerrit/spdk/dpdk 00:13:26.404 * branch refs/changes/84/23184/6 -> FETCH_HEAD 00:13:26.416 [Pipeline] sh 00:13:26.761 + git -C spdk/dpdk checkout FETCH_HEAD 00:13:27.327 Previous HEAD position was db99adb13f kernel/freebsd: fix module build on FreeBSD 14 00:13:27.327 HEAD is now at d0dd711a38 crypto: increase RTE_CRYPTO_MAX_DEVS to accomodate QAT SYM and ASYM VFs 00:13:27.347 [Pipeline] writeFile 00:13:27.367 [Pipeline] sh 00:13:27.648 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:13:27.661 [Pipeline] sh 00:13:27.944 + cat autorun-spdk.conf 00:13:27.944 SPDK_TEST_UNITTEST=1 00:13:27.944 SPDK_RUN_VALGRIND=0 00:13:27.944 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:27.944 SPDK_TEST_NVME=1 00:13:27.944 SPDK_TEST_BLOCKDEV=1 00:13:27.944 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:27.951 RUN_NIGHTLY= 00:13:27.953 [Pipeline] } 00:13:27.974 [Pipeline] // stage 00:13:27.991 [Pipeline] stage 00:13:27.993 [Pipeline] { (Run VM) 00:13:28.008 [Pipeline] sh 00:13:28.290 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:13:28.290 + echo 'Start stage prepare_nvme.sh' 00:13:28.290 Start stage prepare_nvme.sh 00:13:28.290 + [[ -n 9 ]] 00:13:28.290 + disk_prefix=ex9 00:13:28.290 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_3 ]] 00:13:28.290 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_3/autorun-spdk.conf ]] 00:13:28.290 + source /var/jenkins/workspace/freebsd-vg-autotest_3/autorun-spdk.conf 00:13:28.290 ++ SPDK_TEST_UNITTEST=1 00:13:28.290 ++ 
SPDK_RUN_VALGRIND=0 00:13:28.290 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:28.290 ++ SPDK_TEST_NVME=1 00:13:28.290 ++ SPDK_TEST_BLOCKDEV=1 00:13:28.290 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:28.290 ++ RUN_NIGHTLY= 00:13:28.290 + cd /var/jenkins/workspace/freebsd-vg-autotest_3 00:13:28.290 + nvme_files=() 00:13:28.290 + declare -A nvme_files 00:13:28.290 + backend_dir=/var/lib/libvirt/images/backends 00:13:28.290 + nvme_files['nvme.img']=5G 00:13:28.290 + nvme_files['nvme-cmb.img']=5G 00:13:28.290 + nvme_files['nvme-multi0.img']=4G 00:13:28.290 + nvme_files['nvme-multi1.img']=4G 00:13:28.290 + nvme_files['nvme-multi2.img']=4G 00:13:28.290 + nvme_files['nvme-openstack.img']=8G 00:13:28.290 + nvme_files['nvme-zns.img']=5G 00:13:28.290 + (( SPDK_TEST_NVME_PMR == 1 )) 00:13:28.290 + (( SPDK_TEST_FTL == 1 )) 00:13:28.290 + (( SPDK_TEST_NVME_FDP == 1 )) 00:13:28.290 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:13:28.290 + for nvme in "${!nvme_files[@]}" 00:13:28.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:13:28.290 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:13:28.290 + for nvme in "${!nvme_files[@]}" 00:13:28.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:13:28.290 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:13:28.290 + for nvme in "${!nvme_files[@]}" 00:13:28.290 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:13:28.291 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:13:28.291 + for nvme in "${!nvme_files[@]}" 00:13:28.291 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:13:28.291 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:13:28.291 + for nvme in "${!nvme_files[@]}" 00:13:28.291 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:13:28.291 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:13:28.291 + for nvme in "${!nvme_files[@]}" 00:13:28.291 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:13:28.549 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:13:28.549 + for nvme in "${!nvme_files[@]}" 00:13:28.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:13:29.486 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:13:29.486 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:13:29.486 + echo 'End stage prepare_nvme.sh' 00:13:29.486 End stage prepare_nvme.sh 00:13:29.499 [Pipeline] sh 00:13:29.778 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:13:29.778 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f freebsd13 00:13:29.778 00:13:29.778 
DIR=/var/jenkins/workspace/freebsd-vg-autotest_3/spdk/scripts/vagrant 00:13:29.778 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_3/spdk 00:13:29.778 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_3 00:13:29.778 HELP=0 00:13:29.778 DRY_RUN=0 00:13:29.778 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img, 00:13:29.778 NVME_DISKS_TYPE=nvme, 00:13:29.778 NVME_AUTO_CREATE=0 00:13:29.778 NVME_DISKS_NAMESPACES=, 00:13:29.778 NVME_CMB=, 00:13:29.778 NVME_PMR=, 00:13:29.778 NVME_ZNS=, 00:13:29.778 NVME_MS=, 00:13:29.778 NVME_FDP=, 00:13:29.778 SPDK_VAGRANT_DISTRO=freebsd13 00:13:29.778 SPDK_VAGRANT_VMCPU=10 00:13:29.778 SPDK_VAGRANT_VMRAM=12288 00:13:29.778 SPDK_VAGRANT_PROVIDER=libvirt 00:13:29.778 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:13:29.778 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:13:29.778 SPDK_OPENSTACK_NETWORK=0 00:13:29.778 VAGRANT_PACKAGE_BOX=0 00:13:29.778 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:13:29.778 FORCE_DISTRO=true 00:13:29.778 VAGRANT_BOX_VERSION= 00:13:29.778 EXTRA_VAGRANTFILES= 00:13:29.778 NIC_MODEL=e1000 00:13:29.778 00:13:29.778 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt' 00:13:29.778 /var/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest_3 00:13:33.062 Bringing machine 'default' up with 'libvirt' provider... 00:13:33.677 ==> default: Creating image (snapshot of base box volume). 00:13:33.677 ==> default: Creating domain with the following settings... 00:13:33.677 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1715844386_164cde73c403b13e292b 00:13:33.677 ==> default: -- Domain type: kvm 00:13:33.677 ==> default: -- Cpus: 10 00:13:33.677 ==> default: -- Feature: acpi 00:13:33.677 ==> default: -- Feature: apic 00:13:33.677 ==> default: -- Feature: pae 00:13:33.677 ==> default: -- Memory: 12288M 00:13:33.677 ==> default: -- Memory Backing: hugepages: 00:13:33.677 ==> default: -- Management MAC: 00:13:33.677 ==> default: -- Loader: 00:13:33.677 ==> default: -- Nvram: 00:13:33.677 ==> default: -- Base box: spdk/freebsd13 00:13:33.677 ==> default: -- Storage pool: default 00:13:33.677 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1715844386_164cde73c403b13e292b.img (32G) 00:13:33.677 ==> default: -- Volume Cache: default 00:13:33.677 ==> default: -- Kernel: 00:13:33.677 ==> default: -- Initrd: 00:13:33.677 ==> default: -- Graphics Type: vnc 00:13:33.677 ==> default: -- Graphics Port: -1 00:13:33.677 ==> default: -- Graphics IP: 127.0.0.1 00:13:33.677 ==> default: -- Graphics Password: Not defined 00:13:33.677 ==> default: -- Video Type: cirrus 00:13:33.677 ==> default: -- Video VRAM: 9216 00:13:33.677 ==> default: -- Sound Type: 00:13:33.677 ==> default: -- Keymap: en-us 00:13:33.677 ==> default: -- TPM Path: 00:13:33.677 ==> default: -- INPUT: type=mouse, bus=ps2 00:13:33.677 ==> default: -- Command line args: 00:13:33.677 ==> default: -> value=-device, 00:13:33.677 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:13:33.677 ==> default: -> value=-drive, 00:13:33.677 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:13:33.677 ==> default: -> value=-device, 00:13:33.677 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:13:33.935 
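For reference, the raw backing file and the emulated NVMe attachment configured above can be reproduced by hand with stock QEMU tooling. This is only an illustrative sketch inferred from the "Formatting ... fmt=raw size=5368709120 preallocation=falloc" output and from the -drive/-device arguments listed in the domain settings; the CI itself drives this through create_nvme_img.sh and the Vagrant libvirt provider, whose exact wrapper options are not shown here.

    # create the 5 GiB raw backing file (matches "fmt=raw size=5368709120 preallocation=falloc")
    qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex9-nvme.img 5G

    # attach it to the guest as an NVMe controller with one namespace,
    # mirroring the command-line args listed for the libvirt domain above
    qemu-system-x86_64 \
        ... \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Here "..." stands for the rest of the machine definition (CPU, memory, boot disk and so on), which libvirt generates from the domain settings printed above.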
==> default: Creating shared folders metadata... 00:13:33.935 ==> default: Starting domain. 00:13:35.837 ==> default: Waiting for domain to get an IP address... 00:13:57.801 ==> default: Waiting for SSH to become available... 00:14:12.717 ==> default: Configuring and enabling network interfaces... 00:14:14.697 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:14:27.036 ==> default: Mounting SSHFS shared folder... 00:14:27.036 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:14:27.036 ==> default: Checking Mount.. 00:14:27.036 ==> default: Folder Successfully Mounted! 00:14:27.036 ==> default: Running provisioner: file... 00:14:27.295 default: ~/.gitconfig => .gitconfig 00:14:27.861 00:14:27.861 SUCCESS! 00:14:27.861 00:14:27.861 cd to /var/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt and type "vagrant ssh" to use. 00:14:27.861 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:14:27.861 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt" to destroy all trace of vm. 00:14:27.861 00:14:27.869 [Pipeline] } 00:14:27.887 [Pipeline] // stage 00:14:27.896 [Pipeline] dir 00:14:27.897 Running in /var/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt 00:14:27.898 [Pipeline] { 00:14:27.914 [Pipeline] catchError 00:14:27.916 [Pipeline] { 00:14:27.930 [Pipeline] sh 00:14:28.209 + vagrant ssh-config --host vagrant 00:14:28.209 + sed -ne /^Host/,$p 00:14:28.209 + tee ssh_conf 00:14:32.409 Host vagrant 00:14:32.409 HostName 192.168.121.39 00:14:32.409 User vagrant 00:14:32.409 Port 22 00:14:32.409 UserKnownHostsFile /dev/null 00:14:32.409 StrictHostKeyChecking no 00:14:32.409 PasswordAuthentication no 00:14:32.409 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13 00:14:32.409 IdentitiesOnly yes 00:14:32.409 LogLevel FATAL 00:14:32.409 ForwardAgent yes 00:14:32.409 ForwardX11 yes 00:14:32.409 00:14:32.422 [Pipeline] withEnv 00:14:32.424 [Pipeline] { 00:14:32.439 [Pipeline] sh 00:14:32.718 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:14:32.718 source /etc/os-release 00:14:32.718 [[ -e /image.version ]] && img=$(< /image.version) 00:14:32.718 # Minimal, systemd-like check. 00:14:32.718 if [[ -e /.dockerenv ]]; then 00:14:32.718 # Clear garbage from the node's name: 00:14:32.718 # agt-er_autotest_547-896 -> autotest_547-896 00:14:32.718 # $HOSTNAME is the actual container id 00:14:32.718 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:14:32.718 if mountpoint -q /etc/hostname; then 00:14:32.718 # We can assume this is a mount from a host where container is running, 00:14:32.718 # so fetch its hostname to easily identify the target swarm worker. 
00:14:32.718 container="$(< /etc/hostname) ($agent)" 00:14:32.718 else 00:14:32.718 # Fallback 00:14:32.718 container=$agent 00:14:32.718 fi 00:14:32.718 fi 00:14:32.718 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:14:32.718 00:14:32.729 [Pipeline] } 00:14:32.750 [Pipeline] // withEnv 00:14:32.758 [Pipeline] setCustomBuildProperty 00:14:32.775 [Pipeline] stage 00:14:32.777 [Pipeline] { (Tests) 00:14:32.800 [Pipeline] sh 00:14:33.078 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:14:33.094 [Pipeline] timeout 00:14:33.094 Timeout set to expire in 1 hr 0 min 00:14:33.097 [Pipeline] { 00:14:33.114 [Pipeline] sh 00:14:33.392 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:14:33.986 HEAD is now at cc94f3031 raid1: handle read errors 00:14:33.999 [Pipeline] sh 00:14:34.278 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:14:34.293 [Pipeline] sh 00:14:34.572 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:14:34.588 [Pipeline] sh 00:14:34.865 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo 00:14:34.865 ++ readlink -f spdk_repo 00:14:34.865 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:14:34.865 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:14:34.865 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:14:34.865 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:14:34.865 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:14:34.865 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]] 00:14:34.865 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:14:34.865 + cd /usr/home/vagrant/spdk_repo 00:14:34.865 + source /etc/os-release 00:14:34.865 ++ NAME=FreeBSD 00:14:34.865 ++ VERSION=13.2-RELEASE 00:14:34.865 ++ VERSION_ID=13.2 00:14:34.865 ++ ID=freebsd 00:14:34.865 ++ ANSI_COLOR='0;31' 00:14:34.865 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:14:34.865 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:14:34.865 ++ HOME_URL=https://FreeBSD.org/ 00:14:34.865 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:14:34.865 + uname -a 00:14:34.865 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:14:34.865 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:35.123 Contigmem (not present) 00:14:35.123 Buffer Size: not set 00:14:35.123 Num Buffers: not set 00:14:35.123 00:14:35.123 00:14:35.123 Type BDF Vendor Device Driver 00:14:35.123 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:14:35.123 + rm -f /tmp/spdk-ld-path 00:14:35.123 + source autorun-spdk.conf 00:14:35.123 ++ SPDK_TEST_UNITTEST=1 00:14:35.123 ++ SPDK_RUN_VALGRIND=0 00:14:35.123 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:14:35.123 ++ SPDK_TEST_NVME=1 00:14:35.123 ++ SPDK_TEST_BLOCKDEV=1 00:14:35.123 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:35.123 ++ RUN_NIGHTLY= 00:14:35.123 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:14:35.123 + [[ -n '' ]] 00:14:35.123 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:14:35.123 + for M in /var/spdk/build-*-manifest.txt 00:14:35.123 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:14:35.123 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:14:35.123 + for M in /var/spdk/build-*-manifest.txt 00:14:35.123 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:14:35.123 
+ cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:14:35.123 ++ uname 00:14:35.123 + [[ FreeBSD == \L\i\n\u\x ]] 00:14:35.123 + dmesg_pid=1268 00:14:35.123 + [[ FreeBSD == FreeBSD ]] 00:14:35.123 + export LC_ALL=C LC_CTYPE=C 00:14:35.123 + LC_ALL=C 00:14:35.123 + tail -F /var/log/messages 00:14:35.123 + LC_CTYPE=C 00:14:35.123 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:35.123 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:35.123 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:14:35.123 + [[ -x /usr/src/fio-static/fio ]] 00:14:35.123 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:14:35.123 + [[ ! -v VFIO_QEMU_BIN ]] 00:14:35.123 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:14:35.123 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:14:35.123 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:35.123 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:14:35.123 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:14:35.123 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:14:35.123 Test configuration: 00:14:35.123 SPDK_TEST_UNITTEST=1 00:14:35.123 SPDK_RUN_VALGRIND=0 00:14:35.123 SPDK_RUN_FUNCTIONAL_TEST=1 00:14:35.123 SPDK_TEST_NVME=1 00:14:35.123 SPDK_TEST_BLOCKDEV=1 00:14:35.123 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:35.123 RUN_NIGHTLY= 07:27:28 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.382 07:27:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:14:35.382 07:27:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.382 07:27:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.382 07:27:28 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:35.382 07:27:28 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:35.382 07:27:28 -- paths/export.sh@4 -- $ export PATH 00:14:35.382 07:27:28 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:14:35.382 07:27:28 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:14:35.382 07:27:28 -- common/autobuild_common.sh@437 -- $ date +%s 00:14:35.382 07:27:28 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715844448.XXXXXX 00:14:35.382 07:27:28 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715844448.XXXXXX.Xq0I8IKP 00:14:35.382 07:27:28 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:14:35.382 07:27:28 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:14:35.382 07:27:28 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:14:35.382 07:27:28 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:14:35.382 07:27:28 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:14:35.382 07:27:28 
-- common/autobuild_common.sh@453 -- $ get_config_params 00:14:35.382 07:27:28 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:14:35.382 07:27:28 -- common/autotest_common.sh@10 -- $ set +x 00:14:35.382 07:27:28 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:14:35.382 07:27:28 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:14:35.382 07:27:28 -- pm/common@17 -- $ local monitor 00:14:35.382 07:27:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:35.382 07:27:28 -- pm/common@25 -- $ sleep 1 00:14:35.382 07:27:28 -- pm/common@21 -- $ date +%s 00:14:35.382 07:27:28 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715844448 00:14:35.382 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715844448_collect-vmstat.pm.log 00:14:36.759 07:27:29 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:14:36.759 07:27:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:14:36.759 07:27:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:14:36.759 07:27:29 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:14:36.759 07:27:29 -- spdk/autobuild.sh@16 -- $ date -u 00:14:36.759 Thu May 16 07:27:29 UTC 2024 00:14:36.759 07:27:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:14:36.759 v24.05-pre-687-gcc94f3031 00:14:36.759 07:27:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:14:36.759 07:27:29 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:14:36.759 07:27:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:14:36.759 07:27:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:14:36.759 07:27:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:14:36.759 07:27:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:14:36.759 07:27:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:14:36.759 07:27:29 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:14:36.759 07:27:29 -- spdk/autobuild.sh@58 -- $ unittest_build 00:14:36.759 07:27:29 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:14:36.759 07:27:29 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:14:36.759 07:27:29 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:14:36.759 07:27:29 -- common/autotest_common.sh@10 -- $ set +x 00:14:36.759 ************************************ 00:14:36.759 START TEST unittest_build 00:14:36.759 ************************************ 00:14:36.759 07:27:29 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:14:36.759 07:27:29 unittest_build -- common/autobuild_common.sh@404 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:14:37.325 Notice: Vhost, rte_vhost library, virtio, and fuse 00:14:37.325 are only supported on Linux. Turning off default feature. 00:14:37.325 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:37.325 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:37.891 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:14:37.891 Using 'verbs' RDMA provider 00:14:48.424 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:14:58.433 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 
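Condensed, the build being kicked off here is the plain SPDK configure-and-build flow visible in the surrounding lines. A minimal sketch of the same sequence, assuming a checked-out /usr/home/vagrant/spdk_repo/spdk and clang as the compiler, would be:

    cd /usr/home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared
    gmake -j10    # FreeBSD uses gmake rather than make, as the configure output notes below

The full CI run additionally goes through autorun.sh and autobuild_common.sh, which start the resource monitors and select the unittest_build target before invoking the build, as shown in the log above.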
00:14:58.691 Creating mk/config.mk...done. 00:14:58.691 Creating mk/cc.flags.mk...done. 00:14:58.691 Type 'gmake' to build. 00:14:58.691 07:27:52 unittest_build -- common/autobuild_common.sh@405 -- $ gmake -j10 00:14:58.949 gmake[1]: Nothing to be done for 'all'. 00:15:02.231 ps: stdin: not a terminal 00:15:07.501 The Meson build system 00:15:07.501 Version: 1.3.1 00:15:07.501 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:15:07.501 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:07.501 Build type: native build 00:15:07.501 Program cat found: YES (/bin/cat) 00:15:07.501 Project name: DPDK 00:15:07.501 Project version: 24.03.0 00:15:07.501 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:15:07.501 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:15:07.501 Host machine cpu family: x86_64 00:15:07.501 Host machine cpu: x86_64 00:15:07.501 Message: ## Building in Developer Mode ## 00:15:07.501 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:15:07.501 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:15:07.501 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:15:07.501 Program python3 found: YES (/usr/local/bin/python3.9) 00:15:07.501 Program cat found: YES (/bin/cat) 00:15:07.501 Compiler for C supports arguments -march=native: YES 00:15:07.501 Checking for size of "void *" : 8 00:15:07.501 Checking for size of "void *" : 8 (cached) 00:15:07.501 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:15:07.501 Library m found: YES 00:15:07.501 Library numa found: NO 00:15:07.501 Library fdt found: NO 00:15:07.501 Library execinfo found: YES 00:15:07.501 Has header "execinfo.h" : YES 00:15:07.501 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:15:07.501 Run-time dependency libarchive found: NO (tried pkgconfig) 00:15:07.501 Run-time dependency libbsd found: NO (tried pkgconfig) 00:15:07.501 Run-time dependency jansson found: NO (tried pkgconfig) 00:15:07.501 Run-time dependency openssl found: YES 3.0.13 00:15:07.501 Run-time dependency libpcap found: NO (tried pkgconfig) 00:15:07.501 Library pcap found: YES 00:15:07.501 Has header "pcap.h" with dependency -lpcap: YES 00:15:07.501 Compiler for C supports arguments -Wcast-qual: YES 00:15:07.501 Compiler for C supports arguments -Wdeprecated: YES 00:15:07.501 Compiler for C supports arguments -Wformat: YES 00:15:07.501 Compiler for C supports arguments -Wformat-nonliteral: YES 00:15:07.501 Compiler for C supports arguments -Wformat-security: YES 00:15:07.501 Compiler for C supports arguments -Wmissing-declarations: YES 00:15:07.501 Compiler for C supports arguments -Wmissing-prototypes: YES 00:15:07.501 Compiler for C supports arguments -Wnested-externs: YES 00:15:07.501 Compiler for C supports arguments -Wold-style-definition: YES 00:15:07.501 Compiler for C supports arguments -Wpointer-arith: YES 00:15:07.501 Compiler for C supports arguments -Wsign-compare: YES 00:15:07.501 Compiler for C supports arguments -Wstrict-prototypes: YES 00:15:07.501 Compiler for C supports arguments -Wundef: YES 00:15:07.501 Compiler for C supports arguments -Wwrite-strings: YES 00:15:07.501 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:15:07.501 Compiler for C supports arguments -Wno-packed-not-aligned: NO 
00:15:07.501 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:15:07.501 Compiler for C supports arguments -mavx512f: YES 00:15:07.501 Checking if "AVX512 checking" compiles: YES 00:15:07.501 Fetching value of define "__SSE4_2__" : 1 00:15:07.501 Fetching value of define "__AES__" : 1 00:15:07.501 Fetching value of define "__AVX__" : 1 00:15:07.501 Fetching value of define "__AVX2__" : 1 00:15:07.501 Fetching value of define "__AVX512BW__" : 1 00:15:07.501 Fetching value of define "__AVX512CD__" : 1 00:15:07.502 Fetching value of define "__AVX512DQ__" : 1 00:15:07.502 Fetching value of define "__AVX512F__" : 1 00:15:07.502 Fetching value of define "__AVX512VL__" : 1 00:15:07.502 Fetching value of define "__PCLMUL__" : 1 00:15:07.502 Fetching value of define "__RDRND__" : 1 00:15:07.502 Fetching value of define "__RDSEED__" : 1 00:15:07.502 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:15:07.502 Fetching value of define "__znver1__" : (undefined) 00:15:07.502 Fetching value of define "__znver2__" : (undefined) 00:15:07.502 Fetching value of define "__znver3__" : (undefined) 00:15:07.502 Fetching value of define "__znver4__" : (undefined) 00:15:07.502 Compiler for C supports arguments -Wno-format-truncation: NO 00:15:07.502 Message: lib/log: Defining dependency "log" 00:15:07.502 Message: lib/kvargs: Defining dependency "kvargs" 00:15:07.502 Message: lib/telemetry: Defining dependency "telemetry" 00:15:07.502 Checking if "Detect argument count for CPU_OR" compiles: YES 00:15:07.502 Checking for function "getentropy" : YES 00:15:07.502 Message: lib/eal: Defining dependency "eal" 00:15:07.502 Message: lib/ring: Defining dependency "ring" 00:15:07.502 Message: lib/rcu: Defining dependency "rcu" 00:15:07.502 Message: lib/mempool: Defining dependency "mempool" 00:15:07.502 Message: lib/mbuf: Defining dependency "mbuf" 00:15:07.502 Fetching value of define "__PCLMUL__" : 1 (cached) 00:15:07.502 Fetching value of define "__AVX512F__" : 1 (cached) 00:15:07.502 Fetching value of define "__AVX512BW__" : 1 (cached) 00:15:07.502 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:15:07.502 Fetching value of define "__AVX512VL__" : 1 (cached) 00:15:07.502 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:15:07.502 Compiler for C supports arguments -mpclmul: YES 00:15:07.502 Compiler for C supports arguments -maes: YES 00:15:07.502 Compiler for C supports arguments -mavx512f: YES (cached) 00:15:07.502 Compiler for C supports arguments -mavx512bw: YES 00:15:07.502 Compiler for C supports arguments -mavx512dq: YES 00:15:07.502 Compiler for C supports arguments -mavx512vl: YES 00:15:07.502 Compiler for C supports arguments -mvpclmulqdq: YES 00:15:07.502 Compiler for C supports arguments -mavx2: YES 00:15:07.502 Compiler for C supports arguments -mavx: YES 00:15:07.502 Message: lib/net: Defining dependency "net" 00:15:07.502 Message: lib/meter: Defining dependency "meter" 00:15:07.502 Message: lib/ethdev: Defining dependency "ethdev" 00:15:07.502 Message: lib/pci: Defining dependency "pci" 00:15:07.502 Message: lib/cmdline: Defining dependency "cmdline" 00:15:07.502 Message: lib/hash: Defining dependency "hash" 00:15:07.502 Message: lib/timer: Defining dependency "timer" 00:15:07.502 Message: lib/compressdev: Defining dependency "compressdev" 00:15:07.502 Message: lib/cryptodev: Defining dependency "cryptodev" 00:15:07.502 Message: lib/dmadev: Defining dependency "dmadev" 00:15:07.502 Compiler for C supports arguments -Wno-cast-qual: YES 00:15:07.502 
Message: lib/reorder: Defining dependency "reorder" 00:15:07.502 Message: lib/security: Defining dependency "security" 00:15:07.502 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:15:07.502 Message: lib/stack: Defining dependency "stack" 00:15:07.502 Has header "linux/userfaultfd.h" : NO 00:15:07.502 Has header "linux/vduse.h" : NO 00:15:07.502 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:15:07.502 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:15:07.502 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:15:07.502 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:15:07.502 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:15:07.502 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:15:07.502 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:15:07.502 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:15:07.502 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:15:07.502 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:15:07.502 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:15:07.502 Program doxygen found: YES (/usr/local/bin/doxygen) 00:15:07.502 Configuring doxy-api-html.conf using configuration 00:15:07.502 Configuring doxy-api-man.conf using configuration 00:15:07.502 Program mandb found: NO 00:15:07.502 Program sphinx-build found: NO 00:15:07.502 Configuring rte_build_config.h using configuration 00:15:07.502 Message: 00:15:07.502 ================= 00:15:07.502 Applications Enabled 00:15:07.502 ================= 00:15:07.502 00:15:07.502 apps: 00:15:07.502 00:15:07.502 00:15:07.502 Message: 00:15:07.502 ================= 00:15:07.502 Libraries Enabled 00:15:07.502 ================= 00:15:07.502 00:15:07.502 libs: 00:15:07.502 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:15:07.502 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:15:07.502 cryptodev, dmadev, reorder, security, stack, 00:15:07.502 00:15:07.502 Message: 00:15:07.502 =============== 00:15:07.502 Drivers Enabled 00:15:07.502 =============== 00:15:07.502 00:15:07.502 common: 00:15:07.502 00:15:07.502 bus: 00:15:07.502 pci, vdev, 00:15:07.502 mempool: 00:15:07.502 ring, 00:15:07.502 dma: 00:15:07.502 00:15:07.502 net: 00:15:07.502 00:15:07.502 crypto: 00:15:07.502 00:15:07.502 compress: 00:15:07.502 00:15:07.502 00:15:07.502 Message: 00:15:07.502 ================= 00:15:07.502 Content Skipped 00:15:07.502 ================= 00:15:07.502 00:15:07.502 apps: 00:15:07.502 dumpcap: explicitly disabled via build config 00:15:07.502 graph: explicitly disabled via build config 00:15:07.502 pdump: explicitly disabled via build config 00:15:07.502 proc-info: explicitly disabled via build config 00:15:07.502 test-acl: explicitly disabled via build config 00:15:07.502 test-bbdev: explicitly disabled via build config 00:15:07.502 test-cmdline: explicitly disabled via build config 00:15:07.502 test-compress-perf: explicitly disabled via build config 00:15:07.502 test-crypto-perf: explicitly disabled via build config 00:15:07.502 test-dma-perf: explicitly disabled via build config 00:15:07.502 test-eventdev: explicitly disabled via build config 00:15:07.502 test-fib: explicitly disabled via build config 00:15:07.502 test-flow-perf: explicitly disabled via build config 00:15:07.502 test-gpudev: explicitly disabled via build config 00:15:07.502 
test-mldev: explicitly disabled via build config 00:15:07.502 test-pipeline: explicitly disabled via build config 00:15:07.502 test-pmd: explicitly disabled via build config 00:15:07.502 test-regex: explicitly disabled via build config 00:15:07.502 test-sad: explicitly disabled via build config 00:15:07.502 test-security-perf: explicitly disabled via build config 00:15:07.502 00:15:07.502 libs: 00:15:07.502 argparse: explicitly disabled via build config 00:15:07.502 metrics: explicitly disabled via build config 00:15:07.502 acl: explicitly disabled via build config 00:15:07.502 bbdev: explicitly disabled via build config 00:15:07.502 bitratestats: explicitly disabled via build config 00:15:07.502 bpf: explicitly disabled via build config 00:15:07.502 cfgfile: explicitly disabled via build config 00:15:07.502 distributor: explicitly disabled via build config 00:15:07.502 efd: explicitly disabled via build config 00:15:07.502 eventdev: explicitly disabled via build config 00:15:07.502 dispatcher: explicitly disabled via build config 00:15:07.502 gpudev: explicitly disabled via build config 00:15:07.502 gro: explicitly disabled via build config 00:15:07.502 gso: explicitly disabled via build config 00:15:07.502 ip_frag: explicitly disabled via build config 00:15:07.502 jobstats: explicitly disabled via build config 00:15:07.502 latencystats: explicitly disabled via build config 00:15:07.502 lpm: explicitly disabled via build config 00:15:07.502 member: explicitly disabled via build config 00:15:07.502 pcapng: explicitly disabled via build config 00:15:07.502 power: only supported on Linux 00:15:07.502 rawdev: explicitly disabled via build config 00:15:07.502 regexdev: explicitly disabled via build config 00:15:07.502 mldev: explicitly disabled via build config 00:15:07.502 rib: explicitly disabled via build config 00:15:07.502 sched: explicitly disabled via build config 00:15:07.502 vhost: only supported on Linux 00:15:07.502 ipsec: explicitly disabled via build config 00:15:07.502 pdcp: explicitly disabled via build config 00:15:07.502 fib: explicitly disabled via build config 00:15:07.502 port: explicitly disabled via build config 00:15:07.502 pdump: explicitly disabled via build config 00:15:07.502 table: explicitly disabled via build config 00:15:07.502 pipeline: explicitly disabled via build config 00:15:07.502 graph: explicitly disabled via build config 00:15:07.502 node: explicitly disabled via build config 00:15:07.502 00:15:07.502 drivers: 00:15:07.502 common/cpt: not in enabled drivers build config 00:15:07.502 common/dpaax: not in enabled drivers build config 00:15:07.502 common/iavf: not in enabled drivers build config 00:15:07.502 common/idpf: not in enabled drivers build config 00:15:07.502 common/ionic: not in enabled drivers build config 00:15:07.502 common/mvep: not in enabled drivers build config 00:15:07.502 common/octeontx: not in enabled drivers build config 00:15:07.502 bus/auxiliary: not in enabled drivers build config 00:15:07.502 bus/cdx: not in enabled drivers build config 00:15:07.502 bus/dpaa: not in enabled drivers build config 00:15:07.502 bus/fslmc: not in enabled drivers build config 00:15:07.502 bus/ifpga: not in enabled drivers build config 00:15:07.502 bus/platform: not in enabled drivers build config 00:15:07.502 bus/uacce: not in enabled drivers build config 00:15:07.502 bus/vmbus: not in enabled drivers build config 00:15:07.502 common/cnxk: not in enabled drivers build config 00:15:07.502 common/mlx5: not in enabled drivers build config 00:15:07.502 
common/nfp: not in enabled drivers build config 00:15:07.502 common/nitrox: not in enabled drivers build config 00:15:07.502 common/qat: not in enabled drivers build config 00:15:07.502 common/sfc_efx: not in enabled drivers build config 00:15:07.502 mempool/bucket: not in enabled drivers build config 00:15:07.502 mempool/cnxk: not in enabled drivers build config 00:15:07.502 mempool/dpaa: not in enabled drivers build config 00:15:07.502 mempool/dpaa2: not in enabled drivers build config 00:15:07.502 mempool/octeontx: not in enabled drivers build config 00:15:07.502 mempool/stack: not in enabled drivers build config 00:15:07.502 dma/cnxk: not in enabled drivers build config 00:15:07.502 dma/dpaa: not in enabled drivers build config 00:15:07.502 dma/dpaa2: not in enabled drivers build config 00:15:07.502 dma/hisilicon: not in enabled drivers build config 00:15:07.502 dma/idxd: not in enabled drivers build config 00:15:07.502 dma/ioat: not in enabled drivers build config 00:15:07.502 dma/skeleton: not in enabled drivers build config 00:15:07.503 net/af_packet: not in enabled drivers build config 00:15:07.503 net/af_xdp: not in enabled drivers build config 00:15:07.503 net/ark: not in enabled drivers build config 00:15:07.503 net/atlantic: not in enabled drivers build config 00:15:07.503 net/avp: not in enabled drivers build config 00:15:07.503 net/axgbe: not in enabled drivers build config 00:15:07.503 net/bnx2x: not in enabled drivers build config 00:15:07.503 net/bnxt: not in enabled drivers build config 00:15:07.503 net/bonding: not in enabled drivers build config 00:15:07.503 net/cnxk: not in enabled drivers build config 00:15:07.503 net/cpfl: not in enabled drivers build config 00:15:07.503 net/cxgbe: not in enabled drivers build config 00:15:07.503 net/dpaa: not in enabled drivers build config 00:15:07.503 net/dpaa2: not in enabled drivers build config 00:15:07.503 net/e1000: not in enabled drivers build config 00:15:07.503 net/ena: not in enabled drivers build config 00:15:07.503 net/enetc: not in enabled drivers build config 00:15:07.503 net/enetfec: not in enabled drivers build config 00:15:07.503 net/enic: not in enabled drivers build config 00:15:07.503 net/failsafe: not in enabled drivers build config 00:15:07.503 net/fm10k: not in enabled drivers build config 00:15:07.503 net/gve: not in enabled drivers build config 00:15:07.503 net/hinic: not in enabled drivers build config 00:15:07.503 net/hns3: not in enabled drivers build config 00:15:07.503 net/i40e: not in enabled drivers build config 00:15:07.503 net/iavf: not in enabled drivers build config 00:15:07.503 net/ice: not in enabled drivers build config 00:15:07.503 net/idpf: not in enabled drivers build config 00:15:07.503 net/igc: not in enabled drivers build config 00:15:07.503 net/ionic: not in enabled drivers build config 00:15:07.503 net/ipn3ke: not in enabled drivers build config 00:15:07.503 net/ixgbe: not in enabled drivers build config 00:15:07.503 net/mana: not in enabled drivers build config 00:15:07.503 net/memif: not in enabled drivers build config 00:15:07.503 net/mlx4: not in enabled drivers build config 00:15:07.503 net/mlx5: not in enabled drivers build config 00:15:07.503 net/mvneta: not in enabled drivers build config 00:15:07.503 net/mvpp2: not in enabled drivers build config 00:15:07.503 net/netvsc: not in enabled drivers build config 00:15:07.503 net/nfb: not in enabled drivers build config 00:15:07.503 net/nfp: not in enabled drivers build config 00:15:07.503 net/ngbe: not in enabled drivers build 
config 00:15:07.503 net/null: not in enabled drivers build config 00:15:07.503 net/octeontx: not in enabled drivers build config 00:15:07.503 net/octeon_ep: not in enabled drivers build config 00:15:07.503 net/pcap: not in enabled drivers build config 00:15:07.503 net/pfe: not in enabled drivers build config 00:15:07.503 net/qede: not in enabled drivers build config 00:15:07.503 net/ring: not in enabled drivers build config 00:15:07.503 net/sfc: not in enabled drivers build config 00:15:07.503 net/softnic: not in enabled drivers build config 00:15:07.503 net/tap: not in enabled drivers build config 00:15:07.503 net/thunderx: not in enabled drivers build config 00:15:07.503 net/txgbe: not in enabled drivers build config 00:15:07.503 net/vdev_netvsc: not in enabled drivers build config 00:15:07.503 net/vhost: not in enabled drivers build config 00:15:07.503 net/virtio: not in enabled drivers build config 00:15:07.503 net/vmxnet3: not in enabled drivers build config 00:15:07.503 raw/*: missing internal dependency, "rawdev" 00:15:07.503 crypto/armv8: not in enabled drivers build config 00:15:07.503 crypto/bcmfs: not in enabled drivers build config 00:15:07.503 crypto/caam_jr: not in enabled drivers build config 00:15:07.503 crypto/ccp: not in enabled drivers build config 00:15:07.503 crypto/cnxk: not in enabled drivers build config 00:15:07.503 crypto/dpaa_sec: not in enabled drivers build config 00:15:07.503 crypto/dpaa2_sec: not in enabled drivers build config 00:15:07.503 crypto/ipsec_mb: not in enabled drivers build config 00:15:07.503 crypto/mlx5: not in enabled drivers build config 00:15:07.503 crypto/mvsam: not in enabled drivers build config 00:15:07.503 crypto/nitrox: not in enabled drivers build config 00:15:07.503 crypto/null: not in enabled drivers build config 00:15:07.503 crypto/octeontx: not in enabled drivers build config 00:15:07.503 crypto/openssl: not in enabled drivers build config 00:15:07.503 crypto/scheduler: not in enabled drivers build config 00:15:07.503 crypto/uadk: not in enabled drivers build config 00:15:07.503 crypto/virtio: not in enabled drivers build config 00:15:07.503 compress/isal: not in enabled drivers build config 00:15:07.503 compress/mlx5: not in enabled drivers build config 00:15:07.503 compress/nitrox: not in enabled drivers build config 00:15:07.503 compress/octeontx: not in enabled drivers build config 00:15:07.503 compress/zlib: not in enabled drivers build config 00:15:07.503 regex/*: missing internal dependency, "regexdev" 00:15:07.503 ml/*: missing internal dependency, "mldev" 00:15:07.503 vdpa/*: missing internal dependency, "vhost" 00:15:07.503 event/*: missing internal dependency, "eventdev" 00:15:07.503 baseband/*: missing internal dependency, "bbdev" 00:15:07.503 gpu/*: missing internal dependency, "gpudev" 00:15:07.503 00:15:07.503 00:15:07.503 Build targets in project: 84 00:15:07.503 00:15:07.503 DPDK 24.03.0 00:15:07.503 00:15:07.503 User defined options 00:15:07.503 buildtype : debug 00:15:07.503 default_library : static 00:15:07.503 libdir : lib 00:15:07.503 prefix : / 00:15:07.503 c_args : -fPIC -Werror 00:15:07.503 c_link_args : 00:15:07.503 cpu_instruction_set: native 00:15:07.503 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:15:07.503 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:15:07.503 enable_docs : false 00:15:07.503 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:15:07.503 enable_kmods : true 00:15:07.503 tests : false 00:15:07.503 00:15:07.503 Found ninja-1.11.1 at /usr/local/bin/ninja 00:15:08.070 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:15:08.070 [1/239] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:15:08.070 [2/239] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:15:08.070 [3/239] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:15:08.070 [4/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:15:08.070 [5/239] Compiling C object lib/librte_log.a.p/log_log.c.o 00:15:08.070 [6/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:15:08.070 [7/239] Linking static target lib/librte_log.a 00:15:08.329 [8/239] Linking static target lib/librte_kvargs.a 00:15:08.329 [9/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:15:08.329 [10/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:15:08.329 [11/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:15:08.329 [12/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:15:08.329 [13/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:15:08.329 [14/239] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:15:08.329 [15/239] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:15:08.329 [16/239] Linking static target lib/librte_telemetry.a 00:15:08.329 [17/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:15:08.329 [18/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:15:08.588 [19/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:15:08.589 [20/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:15:08.589 [21/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:15:08.589 [22/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:15:08.589 [23/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:15:08.589 [24/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:15:08.589 [25/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:15:08.848 [26/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:15:08.848 [27/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:15:08.848 [28/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:15:08.848 [29/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:15:08.848 [30/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:15:08.848 [31/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:15:08.848 [32/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:15:08.848 [33/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:15:08.848 [34/239] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:15:08.848 [35/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:15:09.106 [36/239] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:15:09.106 [37/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:15:09.106 [38/239] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:15:09.106 [39/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:15:09.106 [40/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:15:09.106 [41/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:15:09.106 [42/239] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:15:09.364 [43/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:15:09.364 [44/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:15:09.364 [45/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:15:09.364 [46/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:15:09.364 [47/239] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:15:09.364 [48/239] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:15:09.364 [49/239] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:15:09.364 [50/239] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:15:09.364 [51/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:15:09.364 [52/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:09.364 [53/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:09.622 [54/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:15:09.622 [55/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:09.622 [56/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:09.622 [57/239] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:09.622 [58/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:15:09.622 [59/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:15:09.622 [60/239] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:09.622 [61/239] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:15:09.622 [62/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:15:09.622 [63/239] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:09.622 [64/239] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:09.622 [65/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:15:09.881 [66/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:15:09.881 [67/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:15:09.881 [68/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:15:09.881 [69/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:15:09.881 [70/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:15:09.881 [71/239] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:15:09.881 [72/239] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:10.139 [73/239] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:10.139 [74/239] Linking static target 
lib/librte_eal.a 00:15:10.139 [75/239] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:15:10.139 [76/239] Linking static target lib/librte_ring.a 00:15:10.139 [77/239] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:10.139 [78/239] Linking static target lib/librte_rcu.a 00:15:10.139 [79/239] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:10.139 [80/239] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:10.139 [81/239] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:10.139 [82/239] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:10.139 [83/239] Linking static target lib/librte_mempool.a 00:15:10.406 [84/239] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:10.406 [85/239] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.406 [86/239] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:10.406 [87/239] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.406 [88/239] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.406 [89/239] Linking target lib/librte_log.so.24.1 00:15:10.406 [90/239] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:10.406 [91/239] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:15:10.406 [92/239] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.406 [93/239] Linking target lib/librte_kvargs.so.24.1 00:15:10.664 [94/239] Linking target lib/librte_telemetry.so.24.1 00:15:10.664 [95/239] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:10.664 [96/239] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:15:10.664 [97/239] Linking static target lib/net/libnet_crc_avx512_lib.a 00:15:10.664 [98/239] Linking static target lib/librte_mbuf.a 00:15:10.664 [99/239] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:10.664 [100/239] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:10.664 [101/239] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:10.664 [102/239] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:10.664 [103/239] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:10.664 [104/239] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:15:10.664 [105/239] Linking static target lib/librte_net.a 00:15:10.664 [106/239] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:15:10.664 [107/239] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:10.664 [108/239] Linking static target lib/librte_meter.a 00:15:10.924 [109/239] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.924 [110/239] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.924 [111/239] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:10.924 [112/239] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:10.924 [113/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:11.182 [114/239] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:11.182 [115/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:11.441 [116/239] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:11.441 [117/239] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:11.441 [118/239] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:11.441 [119/239] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:11.441 [120/239] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:11.441 [121/239] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:11.441 [122/239] Linking static target lib/librte_pci.a 00:15:11.441 [123/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:11.441 [124/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:11.441 [125/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:11.699 [126/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:11.699 [127/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:11.699 [128/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:11.699 [129/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:11.699 [130/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:11.699 [131/239] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:11.699 [132/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:11.699 [133/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:11.699 [134/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:11.699 [135/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:11.699 [136/239] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:11.699 [137/239] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:11.699 [138/239] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:11.699 [139/239] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:11.699 [140/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:11.958 [141/239] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:11.958 [142/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:11.958 [143/239] Linking static target lib/librte_cmdline.a 00:15:11.958 [144/239] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:11.958 [145/239] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:11.958 [146/239] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:11.958 [147/239] Linking static target lib/librte_ethdev.a 00:15:11.958 [148/239] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:11.958 [149/239] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:11.958 [150/239] Linking static target lib/librte_timer.a 00:15:12.216 [151/239] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:12.216 [152/239] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:12.216 [153/239] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:12.216 [154/239] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:12.216 [155/239] Linking static target lib/librte_hash.a 00:15:12.216 [156/239] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:12.216 [157/239] Linking static target lib/librte_compressdev.a 00:15:12.216 [158/239] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:12.475 [159/239] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:12.475 [160/239] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.475 [161/239] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:12.475 [162/239] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:12.475 [163/239] Linking static target lib/librte_reorder.a 00:15:12.733 [164/239] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:12.733 [165/239] Linking static target lib/librte_dmadev.a 00:15:12.733 [166/239] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:15:12.733 [167/239] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:12.733 [168/239] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:15:12.733 [169/239] Linking static target lib/librte_cryptodev.a 00:15:12.733 [170/239] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:15:12.733 [171/239] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:12.733 [172/239] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.733 [173/239] Linking static target lib/librte_stack.a 00:15:12.733 [174/239] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:12.733 [175/239] Linking static target lib/librte_security.a 00:15:12.733 [176/239] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.733 [177/239] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.992 [178/239] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:12.992 [179/239] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.992 [180/239] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.992 [181/239] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.992 [182/239] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.992 [183/239] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:12.992 [184/239] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:13.272 [185/239] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:15:13.272 [186/239] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:13.272 [187/239] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:13.272 [188/239] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:13.272 [189/239] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:13.272 [190/239] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:13.272 [191/239] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:13.530 [192/239] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:13.530 [193/239] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:13.530 [194/239] Linking static target 
drivers/librte_bus_vdev.a 00:15:13.530 [195/239] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:13.530 [196/239] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:13.530 [197/239] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:13.530 [198/239] Linking static target drivers/librte_bus_pci.a 00:15:13.530 [199/239] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:13.530 [200/239] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:13.530 [201/239] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:13.530 [202/239] Linking static target drivers/librte_mempool_ring.a 00:15:13.530 [203/239] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:13.530 [204/239] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:13.789 [205/239] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:14.356 [206/239] Generating kernel/freebsd/contigmem with a custom command 00:15:14.356 machine -> /usr/src/sys/amd64/include 00:15:14.356 x86 -> /usr/src/sys/x86/include 00:15:14.356 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:15:14.356 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:15:14.356 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:15:14.356 touch opt_global.h 00:15:14.356 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:15:14.357 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:15:14.357 :> export_syms 00:15:14.357 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:15:14.357 objcopy --strip-debug contigmem.ko 00:15:14.923 [207/239] Generating kernel/freebsd/nic_uio with a custom command 00:15:14.923 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:15:14.923 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:15:14.923 :> export_syms 00:15:14.923 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:15:14.923 objcopy --strip-debug nic_uio.ko 00:15:17.451 [208/239] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:20.785 [209/239] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:20.785 [210/239] Linking target lib/librte_eal.so.24.1 00:15:21.043 [211/239] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:21.043 [212/239] Linking target drivers/librte_bus_vdev.so.24.1 00:15:21.043 [213/239] Linking target 
lib/librte_stack.so.24.1 00:15:21.043 [214/239] Linking target lib/librte_ring.so.24.1 00:15:21.043 [215/239] Linking target lib/librte_pci.so.24.1 00:15:21.043 [216/239] Linking target lib/librte_timer.so.24.1 00:15:21.043 [217/239] Linking target lib/librte_meter.so.24.1 00:15:21.043 [218/239] Linking target lib/librte_dmadev.so.24.1 00:15:21.325 [219/239] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:21.325 [220/239] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:21.325 [221/239] Linking target lib/librte_rcu.so.24.1 00:15:21.325 [222/239] Linking target lib/librte_mempool.so.24.1 00:15:21.325 [223/239] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:21.325 [224/239] Linking target drivers/librte_bus_pci.so.24.1 00:15:21.325 [225/239] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:21.325 [226/239] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:21.325 [227/239] Linking target drivers/librte_mempool_ring.so.24.1 00:15:21.325 [228/239] Linking target lib/librte_mbuf.so.24.1 00:15:21.593 [229/239] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:21.593 [230/239] Linking target lib/librte_net.so.24.1 00:15:21.593 [231/239] Linking target lib/librte_compressdev.so.24.1 00:15:21.593 [232/239] Linking target lib/librte_cryptodev.so.24.1 00:15:21.593 [233/239] Linking target lib/librte_reorder.so.24.1 00:15:21.851 [234/239] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:21.851 [235/239] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:21.851 [236/239] Linking target lib/librte_security.so.24.1 00:15:21.851 [237/239] Linking target lib/librte_hash.so.24.1 00:15:21.851 [238/239] Linking target lib/librte_cmdline.so.24.1 00:15:21.851 [239/239] Linking target lib/librte_ethdev.so.24.1 00:15:21.851 INFO: autodetecting backend as ninja 00:15:21.851 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:22.783 CC lib/log/log_flags.o 00:15:22.783 CC lib/log/log_deprecated.o 00:15:22.783 CC lib/log/log.o 00:15:22.783 CC lib/ut/ut.o 00:15:22.783 CC lib/ut_mock/mock.o 00:15:22.783 LIB libspdk_ut_mock.a 00:15:22.783 LIB libspdk_log.a 00:15:22.783 LIB libspdk_ut.a 00:15:22.783 CC lib/util/base64.o 00:15:22.783 CC lib/util/cpuset.o 00:15:22.783 CC lib/util/bit_array.o 00:15:22.783 CC lib/util/crc16.o 00:15:22.783 CC lib/util/crc32.o 00:15:22.783 CC lib/util/crc32c.o 00:15:22.783 CXX lib/trace_parser/trace.o 00:15:22.783 CC lib/util/crc32_ieee.o 00:15:22.783 CC lib/dma/dma.o 00:15:22.783 CC lib/ioat/ioat.o 00:15:23.041 CC lib/util/crc64.o 00:15:23.041 CC lib/util/dif.o 00:15:23.041 CC lib/util/fd.o 00:15:23.041 CC lib/util/file.o 00:15:23.041 CC lib/util/hexlify.o 00:15:23.041 CC lib/util/iov.o 00:15:23.041 LIB libspdk_dma.a 00:15:23.041 CC lib/util/math.o 00:15:23.041 CC lib/util/pipe.o 00:15:23.041 LIB libspdk_ioat.a 00:15:23.041 CC lib/util/strerror_tls.o 00:15:23.041 CC lib/util/string.o 00:15:23.041 CC lib/util/uuid.o 00:15:23.041 CC lib/util/fd_group.o 00:15:23.041 CC lib/util/xor.o 00:15:23.041 CC lib/util/zipf.o 00:15:23.041 LIB libspdk_util.a 00:15:23.299 CC lib/conf/conf.o 00:15:23.299 CC lib/rdma/common.o 00:15:23.299 CC lib/rdma/rdma_verbs.o 00:15:23.299 CC lib/env_dpdk/env.o 00:15:23.299 CC lib/env_dpdk/memory.o 00:15:23.299 CC 
lib/env_dpdk/pci.o 00:15:23.299 CC lib/vmd/vmd.o 00:15:23.299 CC lib/idxd/idxd.o 00:15:23.299 CC lib/json/json_parse.o 00:15:23.299 LIB libspdk_conf.a 00:15:23.299 CC lib/env_dpdk/init.o 00:15:23.299 CC lib/env_dpdk/threads.o 00:15:23.299 CC lib/json/json_util.o 00:15:23.299 LIB libspdk_rdma.a 00:15:23.299 CC lib/env_dpdk/pci_ioat.o 00:15:23.299 CC lib/idxd/idxd_user.o 00:15:23.556 CC lib/vmd/led.o 00:15:23.556 CC lib/env_dpdk/pci_virtio.o 00:15:23.556 CC lib/env_dpdk/pci_vmd.o 00:15:23.556 CC lib/json/json_write.o 00:15:23.556 LIB libspdk_idxd.a 00:15:23.556 CC lib/env_dpdk/pci_idxd.o 00:15:23.556 CC lib/env_dpdk/pci_event.o 00:15:23.556 CC lib/env_dpdk/sigbus_handler.o 00:15:23.556 LIB libspdk_vmd.a 00:15:23.556 CC lib/env_dpdk/pci_dpdk.o 00:15:23.556 CC lib/env_dpdk/pci_dpdk_2207.o 00:15:23.556 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:23.556 LIB libspdk_json.a 00:15:23.815 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:23.815 CC lib/jsonrpc/jsonrpc_client.o 00:15:23.815 CC lib/jsonrpc/jsonrpc_server.o 00:15:23.815 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:23.815 LIB libspdk_jsonrpc.a 00:15:23.815 LIB libspdk_trace_parser.a 00:15:23.815 LIB libspdk_env_dpdk.a 00:15:24.073 CC lib/rpc/rpc.o 00:15:24.073 LIB libspdk_rpc.a 00:15:24.073 CC lib/trace/trace.o 00:15:24.073 CC lib/trace/trace_flags.o 00:15:24.073 CC lib/trace/trace_rpc.o 00:15:24.073 CC lib/keyring/keyring.o 00:15:24.073 CC lib/keyring/keyring_rpc.o 00:15:24.073 CC lib/notify/notify.o 00:15:24.073 CC lib/notify/notify_rpc.o 00:15:24.332 LIB libspdk_notify.a 00:15:24.332 LIB libspdk_keyring.a 00:15:24.332 LIB libspdk_trace.a 00:15:24.332 CC lib/thread/thread.o 00:15:24.332 CC lib/thread/iobuf.o 00:15:24.332 CC lib/sock/sock_rpc.o 00:15:24.332 CC lib/sock/sock.o 00:15:24.590 LIB libspdk_sock.a 00:15:24.591 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:24.591 CC lib/nvme/nvme_ctrlr.o 00:15:24.591 CC lib/nvme/nvme_fabric.o 00:15:24.591 CC lib/nvme/nvme_ns_cmd.o 00:15:24.591 CC lib/nvme/nvme_ns.o 00:15:24.591 CC lib/nvme/nvme_pcie.o 00:15:24.591 CC lib/nvme/nvme_pcie_common.o 00:15:24.591 CC lib/nvme/nvme_qpair.o 00:15:24.591 CC lib/nvme/nvme.o 00:15:24.848 LIB libspdk_thread.a 00:15:24.848 CC lib/nvme/nvme_quirks.o 00:15:25.107 CC lib/nvme/nvme_transport.o 00:15:25.107 CC lib/nvme/nvme_discovery.o 00:15:25.107 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:25.107 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:25.107 CC lib/accel/accel.o 00:15:25.107 CC lib/init/json_config.o 00:15:25.107 CC lib/blob/blobstore.o 00:15:25.107 CC lib/init/subsystem.o 00:15:25.107 CC lib/blob/request.o 00:15:25.364 CC lib/nvme/nvme_tcp.o 00:15:25.364 CC lib/init/subsystem_rpc.o 00:15:25.364 CC lib/accel/accel_rpc.o 00:15:25.364 CC lib/init/rpc.o 00:15:25.364 CC lib/nvme/nvme_opal.o 00:15:25.364 LIB libspdk_init.a 00:15:25.364 CC lib/accel/accel_sw.o 00:15:25.364 CC lib/blob/zeroes.o 00:15:25.364 CC lib/blob/blob_bs_dev.o 00:15:25.364 CC lib/nvme/nvme_io_msg.o 00:15:25.364 CC lib/nvme/nvme_poll_group.o 00:15:25.622 CC lib/nvme/nvme_zns.o 00:15:25.622 LIB libspdk_accel.a 00:15:25.622 CC lib/nvme/nvme_stubs.o 00:15:25.622 CC lib/nvme/nvme_auth.o 00:15:25.622 CC lib/nvme/nvme_rdma.o 00:15:25.622 CC lib/event/app.o 00:15:25.622 CC lib/bdev/bdev.o 00:15:25.622 CC lib/event/reactor.o 00:15:25.622 LIB libspdk_blob.a 00:15:25.622 CC lib/bdev/bdev_rpc.o 00:15:25.880 CC lib/event/log_rpc.o 00:15:25.880 CC lib/event/app_rpc.o 00:15:25.880 CC lib/event/scheduler_static.o 00:15:25.880 CC lib/bdev/bdev_zone.o 00:15:25.880 CC lib/blobfs/blobfs.o 00:15:25.880 CC lib/bdev/part.o 00:15:25.880 CC 
lib/lvol/lvol.o 00:15:25.880 CC lib/blobfs/tree.o 00:15:25.880 CC lib/bdev/scsi_nvme.o 00:15:25.880 LIB libspdk_event.a 00:15:26.137 LIB libspdk_nvme.a 00:15:26.137 LIB libspdk_blobfs.a 00:15:26.137 LIB libspdk_lvol.a 00:15:26.137 LIB libspdk_bdev.a 00:15:26.138 CC lib/scsi/dev.o 00:15:26.138 CC lib/nvmf/ctrlr.o 00:15:26.138 CC lib/scsi/lun.o 00:15:26.138 CC lib/nvmf/ctrlr_discovery.o 00:15:26.138 CC lib/scsi/port.o 00:15:26.138 CC lib/nvmf/ctrlr_bdev.o 00:15:26.138 CC lib/scsi/scsi.o 00:15:26.138 CC lib/nvmf/nvmf.o 00:15:26.138 CC lib/nvmf/subsystem.o 00:15:26.138 CC lib/scsi/scsi_bdev.o 00:15:26.396 CC lib/nvmf/nvmf_rpc.o 00:15:26.396 CC lib/nvmf/transport.o 00:15:26.396 CC lib/scsi/scsi_pr.o 00:15:26.396 CC lib/nvmf/tcp.o 00:15:26.396 CC lib/nvmf/stubs.o 00:15:26.396 CC lib/nvmf/mdns_server.o 00:15:26.396 CC lib/scsi/scsi_rpc.o 00:15:26.396 CC lib/scsi/task.o 00:15:26.396 CC lib/nvmf/rdma.o 00:15:26.396 CC lib/nvmf/auth.o 00:15:26.654 LIB libspdk_scsi.a 00:15:26.654 CC lib/iscsi/conn.o 00:15:26.654 CC lib/iscsi/init_grp.o 00:15:26.654 CC lib/iscsi/iscsi.o 00:15:26.654 CC lib/iscsi/param.o 00:15:26.654 CC lib/iscsi/md5.o 00:15:26.654 CC lib/iscsi/portal_grp.o 00:15:26.654 CC lib/iscsi/tgt_node.o 00:15:26.654 CC lib/iscsi/iscsi_subsystem.o 00:15:26.654 CC lib/iscsi/iscsi_rpc.o 00:15:26.911 CC lib/iscsi/task.o 00:15:26.911 LIB libspdk_nvmf.a 00:15:27.168 LIB libspdk_iscsi.a 00:15:27.168 CC module/env_dpdk/env_dpdk_rpc.o 00:15:27.168 CC module/accel/error/accel_error.o 00:15:27.168 CC module/accel/error/accel_error_rpc.o 00:15:27.168 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:27.168 CC module/accel/ioat/accel_ioat.o 00:15:27.169 CC module/accel/dsa/accel_dsa.o 00:15:27.169 CC module/sock/posix/posix.o 00:15:27.169 CC module/accel/iaa/accel_iaa.o 00:15:27.169 CC module/blob/bdev/blob_bdev.o 00:15:27.428 CC module/keyring/file/keyring.o 00:15:27.428 LIB libspdk_env_dpdk_rpc.a 00:15:27.428 CC module/accel/dsa/accel_dsa_rpc.o 00:15:27.428 CC module/accel/ioat/accel_ioat_rpc.o 00:15:27.428 CC module/accel/iaa/accel_iaa_rpc.o 00:15:27.428 LIB libspdk_accel_error.a 00:15:27.428 CC module/keyring/file/keyring_rpc.o 00:15:27.428 LIB libspdk_scheduler_dynamic.a 00:15:27.428 LIB libspdk_accel_ioat.a 00:15:27.428 LIB libspdk_accel_dsa.a 00:15:27.428 LIB libspdk_blob_bdev.a 00:15:27.428 LIB libspdk_accel_iaa.a 00:15:27.428 LIB libspdk_keyring_file.a 00:15:27.428 LIB libspdk_sock_posix.a 00:15:27.428 CC module/blobfs/bdev/blobfs_bdev.o 00:15:27.428 CC module/bdev/lvol/vbdev_lvol.o 00:15:27.428 CC module/bdev/passthru/vbdev_passthru.o 00:15:27.428 CC module/bdev/error/vbdev_error.o 00:15:27.428 CC module/bdev/nvme/bdev_nvme.o 00:15:27.428 CC module/bdev/gpt/gpt.o 00:15:27.428 CC module/bdev/delay/vbdev_delay.o 00:15:27.686 CC module/bdev/malloc/bdev_malloc.o 00:15:27.686 CC module/bdev/null/bdev_null.o 00:15:27.686 CC module/bdev/raid/bdev_raid.o 00:15:27.686 CC module/bdev/gpt/vbdev_gpt.o 00:15:27.686 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:27.686 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:27.686 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:27.686 CC module/bdev/null/bdev_null_rpc.o 00:15:27.686 CC module/bdev/error/vbdev_error_rpc.o 00:15:27.686 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:27.686 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:27.686 LIB libspdk_blobfs_bdev.a 00:15:27.686 LIB libspdk_bdev_passthru.a 00:15:27.686 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:27.686 CC module/bdev/nvme/nvme_rpc.o 00:15:27.686 LIB libspdk_bdev_delay.a 00:15:27.686 CC 
module/bdev/raid/bdev_raid_rpc.o 00:15:27.686 LIB libspdk_bdev_error.a 00:15:27.686 LIB libspdk_bdev_null.a 00:15:27.686 LIB libspdk_bdev_gpt.a 00:15:27.951 CC module/bdev/raid/bdev_raid_sb.o 00:15:27.951 CC module/bdev/nvme/bdev_mdns_client.o 00:15:27.951 CC module/bdev/raid/raid0.o 00:15:27.951 LIB libspdk_bdev_malloc.a 00:15:27.951 CC module/bdev/raid/raid1.o 00:15:27.951 LIB libspdk_bdev_lvol.a 00:15:27.951 CC module/bdev/raid/concat.o 00:15:27.951 CC module/bdev/split/vbdev_split.o 00:15:27.951 CC module/bdev/split/vbdev_split_rpc.o 00:15:27.951 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:27.951 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:27.951 CC module/bdev/aio/bdev_aio_rpc.o 00:15:27.951 CC module/bdev/aio/bdev_aio.o 00:15:27.951 LIB libspdk_bdev_nvme.a 00:15:27.951 LIB libspdk_bdev_raid.a 00:15:27.951 LIB libspdk_bdev_split.a 00:15:28.210 LIB libspdk_bdev_zone_block.a 00:15:28.210 LIB libspdk_bdev_aio.a 00:15:28.210 CC module/event/subsystems/vmd/vmd.o 00:15:28.210 CC module/event/subsystems/keyring/keyring.o 00:15:28.210 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:28.210 CC module/event/subsystems/sock/sock.o 00:15:28.210 CC module/event/subsystems/scheduler/scheduler.o 00:15:28.210 CC module/event/subsystems/iobuf/iobuf.o 00:15:28.210 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:28.483 LIB libspdk_event_keyring.a 00:15:28.483 LIB libspdk_event_sock.a 00:15:28.483 LIB libspdk_event_vmd.a 00:15:28.483 LIB libspdk_event_scheduler.a 00:15:28.483 LIB libspdk_event_iobuf.a 00:15:28.483 CC module/event/subsystems/accel/accel.o 00:15:28.742 LIB libspdk_event_accel.a 00:15:28.742 CC module/event/subsystems/bdev/bdev.o 00:15:28.742 LIB libspdk_event_bdev.a 00:15:29.007 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:29.007 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:29.007 CC module/event/subsystems/scsi/scsi.o 00:15:29.007 LIB libspdk_event_scsi.a 00:15:29.007 LIB libspdk_event_nvmf.a 00:15:29.265 CC module/event/subsystems/iscsi/iscsi.o 00:15:29.265 LIB libspdk_event_iscsi.a 00:15:29.524 CC app/trace_record/trace_record.o 00:15:29.524 CXX app/trace/trace.o 00:15:29.524 CC app/iscsi_tgt/iscsi_tgt.o 00:15:29.524 CC app/spdk_tgt/spdk_tgt.o 00:15:29.524 CC examples/accel/perf/accel_perf.o 00:15:29.524 CC app/nvmf_tgt/nvmf_main.o 00:15:29.524 CC test/accel/dif/dif.o 00:15:29.524 CC examples/bdev/hello_world/hello_bdev.o 00:15:29.524 CC test/app/bdev_svc/bdev_svc.o 00:15:29.524 LINK spdk_trace_record 00:15:29.524 CC examples/blob/hello_world/hello_blob.o 00:15:29.524 LINK nvmf_tgt 00:15:29.524 LINK accel_perf 00:15:29.524 LINK spdk_tgt 00:15:29.524 LINK hello_bdev 00:15:29.524 LINK bdev_svc 00:15:29.524 CC examples/ioat/perf/perf.o 00:15:29.524 LINK iscsi_tgt 00:15:29.524 LINK dif 00:15:29.524 LINK hello_blob 00:15:29.791 LINK ioat_perf 00:15:29.791 CC examples/bdev/bdevperf/bdevperf.o 00:15:29.791 TEST_HEADER include/spdk/accel.h 00:15:29.791 CC test/bdev/bdevio/bdevio.o 00:15:29.791 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:29.791 TEST_HEADER include/spdk/accel_module.h 00:15:29.791 TEST_HEADER include/spdk/assert.h 00:15:29.791 TEST_HEADER include/spdk/barrier.h 00:15:29.791 TEST_HEADER include/spdk/base64.h 00:15:29.791 CC test/app/histogram_perf/histogram_perf.o 00:15:29.791 TEST_HEADER include/spdk/bdev.h 00:15:29.791 TEST_HEADER include/spdk/bdev_module.h 00:15:29.791 CC examples/blob/cli/blobcli.o 00:15:29.791 TEST_HEADER include/spdk/bdev_zone.h 00:15:29.791 TEST_HEADER include/spdk/bit_array.h 00:15:29.791 TEST_HEADER include/spdk/bit_pool.h 
00:15:29.791 TEST_HEADER include/spdk/blob.h 00:15:29.791 TEST_HEADER include/spdk/blob_bdev.h 00:15:29.791 TEST_HEADER include/spdk/blobfs.h 00:15:29.791 CC test/blobfs/mkfs/mkfs.o 00:15:29.791 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:29.791 TEST_HEADER include/spdk/conf.h 00:15:29.791 TEST_HEADER include/spdk/config.h 00:15:29.791 TEST_HEADER include/spdk/cpuset.h 00:15:29.791 TEST_HEADER include/spdk/crc16.h 00:15:29.791 TEST_HEADER include/spdk/crc32.h 00:15:29.791 TEST_HEADER include/spdk/crc64.h 00:15:29.791 TEST_HEADER include/spdk/dif.h 00:15:29.791 TEST_HEADER include/spdk/dma.h 00:15:29.791 TEST_HEADER include/spdk/endian.h 00:15:29.791 TEST_HEADER include/spdk/env.h 00:15:29.791 TEST_HEADER include/spdk/env_dpdk.h 00:15:29.791 TEST_HEADER include/spdk/event.h 00:15:29.791 TEST_HEADER include/spdk/fd.h 00:15:29.791 CC test/app/jsoncat/jsoncat.o 00:15:29.791 TEST_HEADER include/spdk/fd_group.h 00:15:29.791 TEST_HEADER include/spdk/file.h 00:15:29.791 TEST_HEADER include/spdk/ftl.h 00:15:29.791 TEST_HEADER include/spdk/gpt_spec.h 00:15:29.791 TEST_HEADER include/spdk/hexlify.h 00:15:30.050 TEST_HEADER include/spdk/histogram_data.h 00:15:30.050 TEST_HEADER include/spdk/idxd.h 00:15:30.050 CC examples/ioat/verify/verify.o 00:15:30.050 TEST_HEADER include/spdk/idxd_spec.h 00:15:30.050 TEST_HEADER include/spdk/init.h 00:15:30.050 TEST_HEADER include/spdk/ioat.h 00:15:30.050 TEST_HEADER include/spdk/ioat_spec.h 00:15:30.050 TEST_HEADER include/spdk/iscsi_spec.h 00:15:30.050 TEST_HEADER include/spdk/json.h 00:15:30.050 TEST_HEADER include/spdk/jsonrpc.h 00:15:30.050 TEST_HEADER include/spdk/keyring.h 00:15:30.050 TEST_HEADER include/spdk/keyring_module.h 00:15:30.050 TEST_HEADER include/spdk/likely.h 00:15:30.050 TEST_HEADER include/spdk/log.h 00:15:30.050 TEST_HEADER include/spdk/lvol.h 00:15:30.050 TEST_HEADER include/spdk/memory.h 00:15:30.050 TEST_HEADER include/spdk/mmio.h 00:15:30.050 TEST_HEADER include/spdk/nbd.h 00:15:30.050 TEST_HEADER include/spdk/notify.h 00:15:30.050 TEST_HEADER include/spdk/nvme.h 00:15:30.050 TEST_HEADER include/spdk/nvme_intel.h 00:15:30.050 TEST_HEADER include/spdk/nvme_ocssd.h 00:15:30.050 LINK spdk_trace 00:15:30.050 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:30.050 TEST_HEADER include/spdk/nvme_spec.h 00:15:30.050 TEST_HEADER include/spdk/nvme_zns.h 00:15:30.050 TEST_HEADER include/spdk/nvmf.h 00:15:30.050 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:30.050 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:30.050 TEST_HEADER include/spdk/nvmf_spec.h 00:15:30.050 TEST_HEADER include/spdk/nvmf_transport.h 00:15:30.050 TEST_HEADER include/spdk/opal.h 00:15:30.050 TEST_HEADER include/spdk/opal_spec.h 00:15:30.050 LINK jsoncat 00:15:30.050 TEST_HEADER include/spdk/pci_ids.h 00:15:30.050 TEST_HEADER include/spdk/pipe.h 00:15:30.050 LINK histogram_perf 00:15:30.050 TEST_HEADER include/spdk/queue.h 00:15:30.050 TEST_HEADER include/spdk/reduce.h 00:15:30.050 TEST_HEADER include/spdk/rpc.h 00:15:30.050 TEST_HEADER include/spdk/scheduler.h 00:15:30.050 LINK mkfs 00:15:30.050 TEST_HEADER include/spdk/scsi.h 00:15:30.050 TEST_HEADER include/spdk/scsi_spec.h 00:15:30.050 TEST_HEADER include/spdk/sock.h 00:15:30.050 TEST_HEADER include/spdk/stdinc.h 00:15:30.050 TEST_HEADER include/spdk/string.h 00:15:30.050 TEST_HEADER include/spdk/thread.h 00:15:30.050 TEST_HEADER include/spdk/trace.h 00:15:30.050 TEST_HEADER include/spdk/trace_parser.h 00:15:30.050 TEST_HEADER include/spdk/tree.h 00:15:30.050 TEST_HEADER include/spdk/ublk.h 00:15:30.050 TEST_HEADER 
include/spdk/util.h 00:15:30.050 TEST_HEADER include/spdk/uuid.h 00:15:30.050 TEST_HEADER include/spdk/version.h 00:15:30.050 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:30.050 TEST_HEADER include/spdk/vfio_user_spec.h 00:15:30.050 TEST_HEADER include/spdk/vhost.h 00:15:30.050 TEST_HEADER include/spdk/vmd.h 00:15:30.050 TEST_HEADER include/spdk/xor.h 00:15:30.050 TEST_HEADER include/spdk/zipf.h 00:15:30.050 CXX test/cpp_headers/accel.o 00:15:30.050 LINK nvme_fuzz 00:15:30.050 LINK verify 00:15:30.050 CC app/spdk_lspci/spdk_lspci.o 00:15:30.051 CXX test/cpp_headers/accel_module.o 00:15:30.051 LINK bdevperf 00:15:30.051 LINK bdevio 00:15:30.051 LINK blobcli 00:15:30.051 CXX test/cpp_headers/assert.o 00:15:30.051 LINK spdk_lspci 00:15:30.051 CC examples/nvme/hello_world/hello_world.o 00:15:30.051 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:15:30.051 CC test/app/stub/stub.o 00:15:30.310 CXX test/cpp_headers/barrier.o 00:15:30.310 CXX test/cpp_headers/base64.o 00:15:30.310 CC app/spdk_nvme_perf/perf.o 00:15:30.310 CC examples/sock/hello_world/hello_sock.o 00:15:30.310 LINK hello_world 00:15:30.310 CC examples/vmd/lsvmd/lsvmd.o 00:15:30.310 CC examples/nvme/reconnect/reconnect.o 00:15:30.310 LINK stub 00:15:30.310 CC test/dma/test_dma/test_dma.o 00:15:30.310 LINK lsvmd 00:15:30.310 CXX test/cpp_headers/bdev.o 00:15:30.310 LINK hello_sock 00:15:30.310 CC app/spdk_nvme_identify/identify.o 00:15:30.310 LINK reconnect 00:15:30.310 CC examples/vmd/led/led.o 00:15:30.310 CXX test/cpp_headers/bdev_module.o 00:15:30.569 CC examples/nvmf/nvmf/nvmf.o 00:15:30.569 LINK spdk_nvme_perf 00:15:30.569 LINK led 00:15:30.569 LINK test_dma 00:15:30.569 CC app/spdk_nvme_discover/discovery_aer.o 00:15:30.569 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:30.569 LINK iscsi_fuzz 00:15:30.569 CXX test/cpp_headers/bdev_zone.o 00:15:30.569 CC examples/util/zipf/zipf.o 00:15:30.569 LINK spdk_nvme_discover 00:15:30.569 LINK spdk_nvme_identify 00:15:30.569 LINK nvmf 00:15:30.569 CC examples/nvme/arbitration/arbitration.o 00:15:30.569 CC test/env/mem_callbacks/mem_callbacks.o 00:15:30.569 LINK zipf 00:15:30.569 LINK nvme_manage 00:15:30.827 CXX test/cpp_headers/bit_array.o 00:15:30.827 CC examples/idxd/perf/perf.o 00:15:30.827 CC examples/thread/thread/thread_ex.o 00:15:30.827 CXX test/cpp_headers/bit_pool.o 00:15:30.827 CC app/spdk_top/spdk_top.o 00:15:30.827 CXX test/cpp_headers/blob.o 00:15:30.827 CC test/env/vtophys/vtophys.o 00:15:30.827 CC test/event/event_perf/event_perf.o 00:15:30.827 LINK arbitration 00:15:30.827 LINK idxd_perf 00:15:30.827 LINK thread 00:15:30.827 LINK vtophys 00:15:30.827 LINK event_perf 00:15:30.827 CC test/event/reactor/reactor.o 00:15:30.827 CXX test/cpp_headers/blob_bdev.o 00:15:30.827 CC test/event/reactor_perf/reactor_perf.o 00:15:30.827 CC examples/nvme/hotplug/hotplug.o 00:15:30.827 CXX test/cpp_headers/blobfs.o 00:15:30.827 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:31.084 LINK reactor 00:15:31.084 CC examples/nvme/abort/abort.o 00:15:31.084 LINK spdk_top 00:15:31.084 LINK reactor_perf 00:15:31.084 CC app/fio/nvme/fio_plugin.o 00:15:31.084 CXX test/cpp_headers/blobfs_bdev.o 00:15:31.084 LINK cmb_copy 00:15:31.084 LINK hotplug 00:15:31.084 LINK mem_callbacks 00:15:31.084 CXX test/cpp_headers/conf.o 00:15:31.084 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:31.084 LINK abort 00:15:31.084 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:31.084 CC app/fio/bdev/fio_plugin.o 00:15:31.084 gmake[2]: Nothing to be done for 'all'. 
00:15:31.084 CC test/env/memory/memory_ut.o 00:15:31.084 LINK pmr_persistence 00:15:31.084 CC test/env/pci/pci_ut.o 00:15:31.084 CXX test/cpp_headers/config.o 00:15:31.343 CXX test/cpp_headers/cpuset.o 00:15:31.343 LINK env_dpdk_post_init 00:15:31.343 CC test/nvme/aer/aer.o 00:15:31.343 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:15:31.343 struct spdk_nvme_fdp_ruhs ruhs; 00:15:31.343 ^ 00:15:31.343 CC test/rpc_client/rpc_client_test.o 00:15:31.343 CC test/nvme/reset/reset.o 00:15:31.343 1 warning generated. 00:15:31.343 LINK spdk_nvme 00:15:31.343 CXX test/cpp_headers/crc16.o 00:15:31.343 LINK pci_ut 00:15:31.343 CC test/thread/poller_perf/poller_perf.o 00:15:31.343 LINK rpc_client_test 00:15:31.343 LINK spdk_bdev 00:15:31.343 LINK aer 00:15:31.343 LINK reset 00:15:31.343 CC test/thread/lock/spdk_lock.o 00:15:31.343 CXX test/cpp_headers/crc32.o 00:15:31.343 CXX test/cpp_headers/crc64.o 00:15:31.343 CC test/nvme/sgl/sgl.o 00:15:31.343 LINK poller_perf 00:15:31.343 CXX test/cpp_headers/dif.o 00:15:31.603 CC test/nvme/e2edp/nvme_dp.o 00:15:31.603 CXX test/cpp_headers/dma.o 00:15:31.603 LINK sgl 00:15:31.603 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:15:31.603 CC test/unit/lib/accel/accel.c/accel_ut.o 00:15:31.603 CXX test/cpp_headers/endian.o 00:15:31.603 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:15:31.603 LINK nvme_dp 00:15:31.603 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:15:31.603 LINK memory_ut 00:15:31.603 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:15:31.603 LINK histogram_ut 00:15:31.603 LINK spdk_lock 00:15:31.603 CC test/unit/lib/blob/blob.c/blob_ut.o 00:15:31.603 CC test/nvme/overhead/overhead.o 00:15:31.861 LINK tree_ut 00:15:31.861 CXX test/cpp_headers/env.o 00:15:31.861 CC test/nvme/err_injection/err_injection.o 00:15:31.861 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:15:31.861 CC test/unit/lib/dma/dma.c/dma_ut.o 00:15:31.861 LINK blob_bdev_ut 00:15:31.861 LINK overhead 00:15:31.861 LINK err_injection 00:15:31.861 CC test/unit/lib/event/app.c/app_ut.o 00:15:31.861 CXX test/cpp_headers/env_dpdk.o 00:15:31.861 CXX test/cpp_headers/event.o 00:15:31.861 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:15:31.861 LINK dma_ut 00:15:31.861 CC test/nvme/startup/startup.o 00:15:32.120 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:15:32.120 LINK app_ut 00:15:32.120 LINK ioat_ut 00:15:32.120 LINK startup 00:15:32.120 CXX test/cpp_headers/fd.o 00:15:32.120 LINK blobfs_async_ut 00:15:32.120 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:15:32.120 CC test/unit/lib/bdev/part.c/part_ut.o 00:15:32.120 LINK accel_ut 00:15:32.120 CXX test/cpp_headers/fd_group.o 00:15:32.120 CC test/nvme/reserve/reserve.o 00:15:32.120 CXX test/cpp_headers/file.o 00:15:32.120 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:15:32.378 CXX test/cpp_headers/ftl.o 00:15:32.378 LINK reserve 00:15:32.378 LINK blobfs_sync_ut 00:15:32.378 LINK reactor_ut 00:15:32.378 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:15:32.378 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:15:32.378 CXX test/cpp_headers/gpt_spec.o 00:15:32.378 CC test/nvme/simple_copy/simple_copy.o 00:15:32.378 LINK blobfs_bdev_ut 00:15:32.378 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:15:32.378 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:15:32.378 LINK simple_copy 00:15:32.378 LINK conn_ut 00:15:32.378 CXX 
test/cpp_headers/hexlify.o 00:15:32.378 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:15:32.636 LINK jsonrpc_server_ut 00:15:32.636 LINK scsi_nvme_ut 00:15:32.636 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:15:32.636 CC test/nvme/connect_stress/connect_stress.o 00:15:32.636 CXX test/cpp_headers/histogram_data.o 00:15:32.636 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:15:32.636 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:15:32.636 LINK json_util_ut 00:15:32.636 LINK connect_stress 00:15:32.636 LINK init_grp_ut 00:15:32.636 CXX test/cpp_headers/idxd.o 00:15:32.636 LINK bdev_ut 00:15:32.636 CC test/unit/lib/iscsi/param.c/param_ut.o 00:15:32.894 CXX test/cpp_headers/idxd_spec.o 00:15:32.894 LINK part_ut 00:15:32.894 LINK json_parse_ut 00:15:32.894 CC test/nvme/boot_partition/boot_partition.o 00:15:32.894 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:15:32.894 CXX test/cpp_headers/init.o 00:15:32.894 CC test/nvme/compliance/nvme_compliance.o 00:15:32.894 CC test/nvme/fused_ordering/fused_ordering.o 00:15:32.894 LINK boot_partition 00:15:32.894 LINK param_ut 00:15:32.894 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:15:32.894 LINK gpt_ut 00:15:32.894 CXX test/cpp_headers/ioat.o 00:15:32.894 LINK fused_ordering 00:15:32.894 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:33.152 LINK json_write_ut 00:15:33.152 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:15:33.152 LINK nvme_compliance 00:15:33.152 CXX test/cpp_headers/ioat_spec.o 00:15:33.152 CC test/unit/lib/log/log.c/log_ut.o 00:15:33.152 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:15:33.152 LINK doorbell_aers 00:15:33.152 CXX test/cpp_headers/iscsi_spec.o 00:15:33.152 LINK blob_ut 00:15:33.152 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:15:33.152 LINK log_ut 00:15:33.152 LINK iscsi_ut 00:15:33.152 LINK portal_grp_ut 00:15:33.152 CC test/nvme/fdp/fdp.o 00:15:33.152 CXX test/cpp_headers/json.o 00:15:33.152 CXX test/cpp_headers/jsonrpc.o 00:15:33.409 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:15:33.409 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:15:33.409 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:15:33.409 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:15:33.409 LINK fdp 00:15:33.409 LINK tgt_node_ut 00:15:33.409 CXX test/cpp_headers/keyring.o 00:15:33.409 CC test/unit/lib/notify/notify.c/notify_ut.o 00:15:33.668 LINK bdev_raid_sb_ut 00:15:33.668 LINK bdev_zone_ut 00:15:33.668 LINK vbdev_lvol_ut 00:15:33.668 CXX test/cpp_headers/keyring_module.o 00:15:33.668 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:15:33.668 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:15:33.668 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:15:33.668 LINK bdev_raid_ut 00:15:33.668 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:15:33.668 LINK notify_ut 00:15:33.668 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:15:33.668 CXX test/cpp_headers/likely.o 00:15:33.668 LINK lvol_ut 00:15:33.926 LINK concat_ut 00:15:33.926 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:15:33.926 CXX test/cpp_headers/log.o 00:15:33.926 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:15:33.926 LINK raid1_ut 00:15:33.926 CXX test/cpp_headers/lvol.o 00:15:33.926 LINK vbdev_zone_block_ut 00:15:33.926 LINK bdev_ut 00:15:33.926 LINK raid0_ut 00:15:33.926 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:15:33.926 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:15:33.926 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:15:33.926 CXX test/cpp_headers/memory.o 00:15:33.926 CC 
test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:15:33.926 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:15:33.926 CC test/unit/lib/sock/sock.c/sock_ut.o 00:15:34.185 CXX test/cpp_headers/mmio.o 00:15:34.185 LINK dev_ut 00:15:34.185 LINK nvme_ut 00:15:34.185 LINK lun_ut 00:15:34.185 CC test/unit/lib/sock/posix.c/posix_ut.o 00:15:34.185 CXX test/cpp_headers/nbd.o 00:15:34.185 CXX test/cpp_headers/notify.o 00:15:34.185 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:15:34.185 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:15:34.442 CXX test/cpp_headers/nvme.o 00:15:34.442 LINK scsi_ut 00:15:34.442 LINK sock_ut 00:15:34.443 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:15:34.443 CXX test/cpp_headers/nvme_intel.o 00:15:34.443 LINK subsystem_ut 00:15:34.443 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:15:34.443 LINK posix_ut 00:15:34.443 LINK ctrlr_ut 00:15:34.701 CC test/unit/lib/thread/thread.c/thread_ut.o 00:15:34.701 CXX test/cpp_headers/nvme_ocssd.o 00:15:34.701 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:15:34.701 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:15:34.701 LINK nvme_ctrlr_ut 00:15:34.701 LINK tcp_ut 00:15:34.701 LINK ctrlr_discovery_ut 00:15:34.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:34.701 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:15:34.701 LINK ctrlr_bdev_ut 00:15:34.701 LINK iobuf_ut 00:15:34.701 LINK scsi_bdev_ut 00:15:34.701 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:15:34.959 CXX test/cpp_headers/nvme_spec.o 00:15:34.959 CC test/unit/lib/util/base64.c/base64_ut.o 00:15:34.959 LINK bdev_nvme_ut 00:15:34.959 LINK scsi_pr_ut 00:15:34.959 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:15:34.959 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:15:34.959 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:15:34.959 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:15:34.959 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:15:34.959 LINK base64_ut 00:15:34.959 CXX test/cpp_headers/nvme_zns.o 00:15:34.959 LINK pci_event_ut 00:15:34.959 CXX test/cpp_headers/nvmf.o 00:15:35.265 LINK bit_array_ut 00:15:35.265 LINK nvmf_ut 00:15:35.265 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:15:35.265 LINK thread_ut 00:15:35.265 CXX test/cpp_headers/nvmf_cmd.o 00:15:35.265 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:15:35.265 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:15:35.266 LINK auth_ut 00:15:35.266 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:15:35.266 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:15:35.266 LINK subsystem_ut 00:15:35.266 LINK nvme_ctrlr_cmd_ut 00:15:35.266 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:15:35.266 LINK cpuset_ut 00:15:35.266 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:35.266 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:15:35.266 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:15:35.266 LINK crc16_ut 00:15:35.266 CXX test/cpp_headers/nvmf_spec.o 00:15:35.538 LINK rpc_ut 00:15:35.538 LINK nvme_ctrlr_ocssd_cmd_ut 00:15:35.538 LINK crc32_ieee_ut 00:15:35.538 CXX test/cpp_headers/nvmf_transport.o 00:15:35.538 LINK rpc_ut 00:15:35.538 CXX test/cpp_headers/opal.o 00:15:35.538 LINK rdma_ut 00:15:35.538 CXX test/cpp_headers/opal_spec.o 00:15:35.539 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:15:35.539 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:15:35.539 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:15:35.539 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:15:35.539 CC test/unit/lib/util/dif.c/dif_ut.o 00:15:35.539 LINK crc32c_ut 00:15:35.539 LINK keyring_ut 00:15:35.539 
LINK crc64_ut 00:15:35.539 CXX test/cpp_headers/pci_ids.o 00:15:35.797 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:15:35.797 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:15:35.797 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:15:35.797 LINK idxd_user_ut 00:15:35.797 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:15:35.797 CXX test/cpp_headers/pipe.o 00:15:35.797 LINK transport_ut 00:15:35.797 LINK nvme_ns_ut 00:15:35.797 CC test/unit/lib/rdma/common.c/common_ut.o 00:15:35.797 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:15:35.797 CC test/unit/lib/util/iov.c/iov_ut.o 00:15:35.797 CC test/unit/lib/util/math.c/math_ut.o 00:15:35.797 CXX test/cpp_headers/queue.o 00:15:35.797 CXX test/cpp_headers/reduce.o 00:15:35.797 LINK dif_ut 00:15:35.797 LINK iov_ut 00:15:35.797 LINK math_ut 00:15:36.055 CXX test/cpp_headers/rpc.o 00:15:36.055 LINK common_ut 00:15:36.055 LINK idxd_ut 00:15:36.055 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:15:36.055 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:15:36.055 CXX test/cpp_headers/scheduler.o 00:15:36.056 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:15:36.056 CC test/unit/lib/util/string.c/string_ut.o 00:15:36.056 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:15:36.056 CXX test/cpp_headers/scsi.o 00:15:36.056 LINK pipe_ut 00:15:36.315 LINK string_ut 00:15:36.315 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:15:36.315 CXX test/cpp_headers/scsi_spec.o 00:15:36.315 LINK nvme_ns_ocssd_cmd_ut 00:15:36.315 CC test/unit/lib/util/xor.c/xor_ut.o 00:15:36.315 LINK nvme_pcie_ut 00:15:36.315 CXX test/cpp_headers/sock.o 00:15:36.315 LINK nvme_ns_cmd_ut 00:15:36.315 CXX test/cpp_headers/stdinc.o 00:15:36.315 LINK xor_ut 00:15:36.315 CXX test/cpp_headers/string.o 00:15:36.573 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:15:36.573 LINK nvme_poll_group_ut 00:15:36.573 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:15:36.573 LINK nvme_quirks_ut 00:15:36.573 CXX test/cpp_headers/thread.o 00:15:36.573 CXX test/cpp_headers/trace.o 00:15:36.573 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:15:36.573 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:15:36.573 LINK nvme_qpair_ut 00:15:36.573 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:15:36.831 CXX test/cpp_headers/trace_parser.o 00:15:36.831 CXX test/cpp_headers/tree.o 00:15:36.831 CXX test/cpp_headers/ublk.o 00:15:36.831 CXX test/cpp_headers/util.o 00:15:36.831 LINK nvme_transport_ut 00:15:36.831 CXX test/cpp_headers/uuid.o 00:15:36.831 CXX test/cpp_headers/version.o 00:15:36.831 LINK nvme_opal_ut 00:15:36.831 LINK nvme_tcp_ut 00:15:36.831 CXX test/cpp_headers/vfio_user_pci.o 00:15:36.831 CXX test/cpp_headers/vfio_user_spec.o 00:15:36.831 CXX test/cpp_headers/vhost.o 00:15:36.831 CXX test/cpp_headers/vmd.o 00:15:36.831 LINK nvme_io_msg_ut 00:15:36.831 CXX test/cpp_headers/xor.o 00:15:36.831 CXX test/cpp_headers/zipf.o 00:15:37.091 LINK nvme_fabric_ut 00:15:37.091 LINK nvme_pcie_common_ut 00:15:37.349 LINK nvme_rdma_ut 00:15:37.349 00:15:37.349 real 1m0.842s 00:15:37.349 user 3m50.323s 00:15:37.349 sys 0m48.412s 00:15:37.349 07:28:30 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:15:37.349 07:28:30 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:15:37.349 ************************************ 00:15:37.349 END TEST unittest_build 00:15:37.349 ************************************ 00:15:37.349 07:28:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 
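(Aside, not part of the captured log.) The contigmem and nic_uio kernel modules built in the DPDK stage above are installed and loaded by the autotest steps that follow: the cp into /boot/modules and the hw.contigmem/hw.nic_uio tunables appear a few entries below. A minimal manual equivalent on a FreeBSD host might look like the sketch here; the kenv-before-kldload ordering and the build-output path are assumptions, while the tunable names and values mirror the setup.sh output later in this log.

    # Hedged sketch: install the freshly built modules, set their tunables,
    # then load them and raise the socket buffer limit as autotest does below.
    cp -f dpdk/build/kmod/contigmem.ko /boot/modules/    # assumed build path
    cp -f dpdk/build/kmod/nic_uio.ko /boot/modules/
    kenv hw.contigmem.num_buffers=8                      # values as printed by setup.sh
    kenv hw.contigmem.buffer_size=268435456
    kenv hw.nic_uio.bdfs=0:16:0
    kldload contigmem
    kldload nic_uio
    sysctl kern.ipc.maxsockbuf=4194304                   # raised by autotest_common.sh below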
00:15:37.350 07:28:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:15:37.350 07:28:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:15:37.350 07:28:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:37.350 07:28:30 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:15:37.350 07:28:30 -- pm/common@44 -- $ pid=1311 00:15:37.350 07:28:30 -- pm/common@50 -- $ kill -TERM 1311 00:15:37.610 07:28:30 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.610 07:28:30 -- nvmf/common.sh@7 -- # uname -s 00:15:37.610 07:28:30 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:15:37.610 07:28:30 -- nvmf/common.sh@7 -- # return 0 00:15:37.610 07:28:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:37.610 07:28:30 -- spdk/autotest.sh@32 -- # uname -s 00:15:37.610 07:28:30 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:15:37.610 07:28:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:15:37.610 07:28:30 -- pm/common@17 -- # local monitor 00:15:37.610 07:28:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:37.610 07:28:30 -- pm/common@25 -- # sleep 1 00:15:37.610 07:28:30 -- pm/common@21 -- # date +%s 00:15:37.610 07:28:30 -- pm/common@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715844510 00:15:37.610 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715844510_collect-vmstat.pm.log 00:15:38.568 07:28:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:38.568 07:28:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:15:38.568 07:28:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:38.568 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.568 07:28:32 -- spdk/autotest.sh@59 -- # create_test_list 00:15:38.568 07:28:32 -- common/autotest_common.sh@744 -- # xtrace_disable 00:15:38.568 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.568 07:28:32 -- spdk/autotest.sh@61 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:15:38.568 07:28:32 -- spdk/autotest.sh@61 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:15:38.568 07:28:32 -- spdk/autotest.sh@61 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:15:38.568 07:28:32 -- spdk/autotest.sh@62 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:15:38.568 07:28:32 -- spdk/autotest.sh@63 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:15:38.568 07:28:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:15:38.568 07:28:32 -- common/autotest_common.sh@1451 -- # uname 00:15:38.568 07:28:32 -- common/autotest_common.sh@1451 -- # '[' FreeBSD = FreeBSD ']' 00:15:38.568 07:28:32 -- common/autotest_common.sh@1452 -- # kldunload contigmem.ko 00:15:38.568 kldunload: can't find file contigmem.ko 00:15:38.568 07:28:32 -- common/autotest_common.sh@1452 -- # true 00:15:38.568 07:28:32 -- common/autotest_common.sh@1453 -- # '[' -n '' ']' 00:15:38.568 07:28:32 -- common/autotest_common.sh@1459 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:15:38.568 07:28:32 -- common/autotest_common.sh@1460 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:15:38.568 07:28:32 -- common/autotest_common.sh@1461 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:15:38.568 
07:28:32 -- common/autotest_common.sh@1462 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:15:38.568 07:28:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:15:38.568 07:28:32 -- common/autotest_common.sh@1471 -- # uname 00:15:38.568 07:28:32 -- common/autotest_common.sh@1471 -- # [[ FreeBSD = FreeBSD ]] 00:15:38.568 07:28:32 -- common/autotest_common.sh@1471 -- # sysctl -n kern.ipc.maxsockbuf 00:15:38.568 07:28:32 -- common/autotest_common.sh@1471 -- # (( 2097152 < 4194304 )) 00:15:38.568 07:28:32 -- common/autotest_common.sh@1472 -- # sysctl kern.ipc.maxsockbuf=4194304 00:15:38.568 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:15:38.568 07:28:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:15:38.568 07:28:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:15:38.568 07:28:32 -- spdk/autotest.sh@72 -- # hash lcov 00:15:38.568 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:15:38.568 07:28:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:15:38.568 07:28:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:38.568 07:28:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.568 07:28:32 -- spdk/autotest.sh@91 -- # rm -f 00:15:38.568 07:28:32 -- spdk/autotest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:38.828 kldunload: can't find file contigmem.ko 00:15:38.828 kldunload: can't find file nic_uio.ko 00:15:38.828 07:28:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:15:38.828 07:28:32 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:15:38.828 07:28:32 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:15:38.828 07:28:32 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:15:38.828 07:28:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:15:38.828 07:28:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:15:38.828 07:28:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:15:38.828 07:28:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:15:38.828 07:28:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:15:38.828 07:28:32 -- scripts/common.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:15:38.828 nvme0ns1 is not a block device 00:15:38.828 07:28:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:15:38.828 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:15:38.828 07:28:32 -- scripts/common.sh@391 -- # pt= 00:15:38.828 07:28:32 -- scripts/common.sh@392 -- # return 1 00:15:38.828 07:28:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:15:38.828 1+0 records in 00:15:38.828 1+0 records out 00:15:38.828 1048576 bytes transferred in 0.007327 secs (143116291 bytes/sec) 00:15:38.828 07:28:32 -- spdk/autotest.sh@118 -- # sync 00:15:39.396 07:28:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:15:39.396 07:28:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:15:39.396 07:28:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:15:39.961 07:28:33 -- spdk/autotest.sh@124 -- # uname -s 00:15:39.961 07:28:33 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:15:39.961 07:28:33 -- spdk/autotest.sh@128 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:39.961 Contigmem (not present) 00:15:39.961 Buffer Size: not set 00:15:39.961 Num Buffers: not set 00:15:39.961 00:15:39.961 00:15:39.961 Type BDF Vendor 
Device Driver 00:15:39.961 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:15:40.219 07:28:33 -- spdk/autotest.sh@130 -- # uname -s 00:15:40.219 07:28:33 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:15:40.219 07:28:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:15:40.219 07:28:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.219 07:28:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.219 07:28:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:15:40.219 07:28:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:40.219 07:28:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.219 07:28:33 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:40.219 kldunload: can't find file nic_uio.ko 00:15:40.219 hw.nic_uio.bdfs="0:16:0" 00:15:40.219 hw.contigmem.num_buffers="8" 00:15:40.219 hw.contigmem.buffer_size="268435456" 00:15:40.785 07:28:34 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:15:40.785 07:28:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.785 07:28:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.785 07:28:34 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:15:40.785 07:28:34 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:15:40.785 07:28:34 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:15:40.785 07:28:34 -- common/autotest_common.sh@1573 -- # bdfs=() 00:15:40.785 07:28:34 -- common/autotest_common.sh@1573 -- # local bdfs 00:15:40.785 07:28:34 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:15:40.785 07:28:34 -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:40.785 07:28:34 -- common/autotest_common.sh@1509 -- # local bdfs 00:15:40.785 07:28:34 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:40.785 07:28:34 -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:40.785 07:28:34 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:15:40.785 07:28:34 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:15:40.785 07:28:34 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:15:40.785 07:28:34 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:15:40.785 07:28:34 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:40.785 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:15:40.785 07:28:34 -- common/autotest_common.sh@1576 -- # device= 00:15:40.785 07:28:34 -- common/autotest_common.sh@1576 -- # true 00:15:40.785 07:28:34 -- common/autotest_common.sh@1577 -- # [[ '' == \0\x\0\a\5\4 ]] 00:15:40.785 07:28:34 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:15:40.785 07:28:34 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:15:40.785 07:28:34 -- common/autotest_common.sh@1589 -- # return 0 00:15:40.785 07:28:34 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:15:40.785 07:28:34 -- spdk/autotest.sh@151 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:15:40.785 07:28:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:40.785 07:28:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:40.785 07:28:34 -- common/autotest_common.sh@10 -- # set +x 00:15:40.785 ************************************ 00:15:40.785 START TEST unittest 00:15:40.785 ************************************ 00:15:40.785 07:28:34 unittest -- common/autotest_common.sh@1121 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:15:40.785 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:15:40.785 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:40.785 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:40.785 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:15:40.785 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:15:40.785 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:15:40.785 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:40.785 ++ rpc_py=rpc_cmd 00:15:40.785 ++ set -e 00:15:40.785 ++ shopt -s nullglob 00:15:40.785 ++ shopt -s extglob 00:15:40.785 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:15:40.785 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:40.785 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:40.785 +++ CONFIG_WPDK_DIR= 00:15:40.785 +++ CONFIG_ASAN=n 00:15:40.785 +++ CONFIG_VBDEV_COMPRESS=n 00:15:40.785 +++ CONFIG_HAVE_EXECINFO_H=y 00:15:40.785 +++ CONFIG_USDT=n 00:15:40.785 +++ CONFIG_CUSTOMOCF=n 00:15:40.785 +++ CONFIG_PREFIX=/usr/local 00:15:40.785 +++ CONFIG_RBD=n 00:15:40.785 +++ CONFIG_LIBDIR= 00:15:40.785 +++ CONFIG_IDXD=y 00:15:40.785 +++ CONFIG_NVME_CUSE=n 00:15:40.785 +++ CONFIG_SMA=n 00:15:40.785 +++ CONFIG_VTUNE=n 00:15:40.785 +++ CONFIG_TSAN=n 00:15:40.785 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:40.785 +++ CONFIG_VFIO_USER_DIR= 00:15:40.785 +++ CONFIG_PGO_CAPTURE=n 00:15:40.785 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:15:40.785 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:40.785 +++ CONFIG_LTO=n 00:15:40.785 +++ CONFIG_ISCSI_INITIATOR=n 00:15:40.785 +++ CONFIG_CET=n 00:15:40.785 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:40.785 +++ CONFIG_OCF_PATH= 00:15:40.785 +++ CONFIG_RDMA_SET_TOS=y 00:15:40.785 +++ CONFIG_HAVE_ARC4RANDOM=y 00:15:40.785 +++ CONFIG_HAVE_LIBARCHIVE=n 00:15:40.785 +++ CONFIG_UBLK=n 00:15:40.785 +++ CONFIG_ISAL_CRYPTO=y 00:15:40.785 +++ CONFIG_OPENSSL_PATH= 00:15:40.785 +++ CONFIG_OCF=n 00:15:40.785 +++ CONFIG_FUSE=n 00:15:40.785 +++ CONFIG_VTUNE_DIR= 00:15:40.785 +++ CONFIG_FUZZER_LIB= 00:15:40.785 +++ CONFIG_FUZZER=n 00:15:40.785 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:40.785 +++ CONFIG_CRYPTO=n 00:15:40.785 +++ CONFIG_PGO_USE=n 00:15:40.785 +++ CONFIG_VHOST=n 00:15:40.785 +++ CONFIG_DAOS=n 00:15:40.785 +++ CONFIG_DPDK_INC_DIR= 00:15:40.785 +++ CONFIG_DAOS_DIR= 00:15:40.785 +++ CONFIG_UNIT_TESTS=y 00:15:40.785 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:15:40.785 +++ CONFIG_VIRTIO=n 00:15:40.785 +++ CONFIG_DPDK_UADK=n 00:15:40.785 +++ CONFIG_COVERAGE=n 00:15:40.785 +++ CONFIG_RDMA=y 00:15:40.786 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:40.786 +++ CONFIG_URING_PATH= 00:15:40.786 +++ CONFIG_XNVME=n 00:15:40.786 +++ CONFIG_VFIO_USER=n 00:15:40.786 +++ CONFIG_ARCH=native 00:15:40.786 +++ CONFIG_HAVE_EVP_MAC=y 00:15:40.786 +++ CONFIG_URING_ZNS=n 00:15:40.786 +++ CONFIG_WERROR=y 00:15:40.786 +++ CONFIG_HAVE_LIBBSD=n 00:15:40.786 +++ CONFIG_UBSAN=n 00:15:40.786 +++ CONFIG_IPSEC_MB_DIR= 00:15:40.786 +++ CONFIG_GOLANG=n 00:15:40.786 +++ CONFIG_ISAL=y 00:15:40.786 +++ CONFIG_IDXD_KERNEL=n 00:15:40.786 +++ CONFIG_DPDK_LIB_DIR= 00:15:40.786 +++ CONFIG_RDMA_PROV=verbs 00:15:40.786 +++ CONFIG_APPS=y 00:15:40.786 +++ CONFIG_SHARED=n 00:15:40.786 +++ CONFIG_HAVE_KEYUTILS=n 00:15:40.786 +++ CONFIG_FC_PATH= 00:15:40.786 +++ CONFIG_DPDK_PKG_CONFIG=n 00:15:40.786 +++ CONFIG_FC=n 00:15:40.786 +++ 
CONFIG_AVAHI=n 00:15:40.786 +++ CONFIG_FIO_PLUGIN=y 00:15:40.786 +++ CONFIG_RAID5F=n 00:15:40.786 +++ CONFIG_EXAMPLES=y 00:15:40.786 +++ CONFIG_TESTS=y 00:15:40.786 +++ CONFIG_CRYPTO_MLX5=n 00:15:40.786 +++ CONFIG_MAX_LCORES= 00:15:40.786 +++ CONFIG_IPSEC_MB=n 00:15:40.786 +++ CONFIG_PGO_DIR= 00:15:40.786 +++ CONFIG_DEBUG=y 00:15:40.786 +++ CONFIG_DPDK_COMPRESSDEV=n 00:15:40.786 +++ CONFIG_CROSS_PREFIX= 00:15:40.786 +++ CONFIG_URING=n 00:15:40.786 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:40.786 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:40.786 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:15:40.786 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:15:40.786 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:15:40.786 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:15:40.786 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:15:40.786 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:15:40.786 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:40.786 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:40.786 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:40.786 +++ VHOST_APP=("$_app_dir/vhost") 00:15:40.786 +++ DD_APP=("$_app_dir/spdk_dd") 00:15:40.786 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:15:40.786 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:40.786 +++ [[ #ifndef SPDK_CONFIG_H 00:15:40.786 #define SPDK_CONFIG_H 00:15:40.786 #define SPDK_CONFIG_APPS 1 00:15:40.786 #define SPDK_CONFIG_ARCH native 00:15:40.786 #undef SPDK_CONFIG_ASAN 00:15:40.786 #undef SPDK_CONFIG_AVAHI 00:15:40.786 #undef SPDK_CONFIG_CET 00:15:40.786 #undef SPDK_CONFIG_COVERAGE 00:15:40.786 #define SPDK_CONFIG_CROSS_PREFIX 00:15:40.786 #undef SPDK_CONFIG_CRYPTO 00:15:40.786 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:40.786 #undef SPDK_CONFIG_CUSTOMOCF 00:15:40.786 #undef SPDK_CONFIG_DAOS 00:15:40.786 #define SPDK_CONFIG_DAOS_DIR 00:15:40.786 #define SPDK_CONFIG_DEBUG 1 00:15:40.786 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:40.786 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:40.786 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:40.786 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:40.786 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:40.786 #undef SPDK_CONFIG_DPDK_UADK 00:15:40.786 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:40.786 #define SPDK_CONFIG_EXAMPLES 1 00:15:40.786 #undef SPDK_CONFIG_FC 00:15:40.786 #define SPDK_CONFIG_FC_PATH 00:15:40.786 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:40.786 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:40.786 #undef SPDK_CONFIG_FUSE 00:15:40.786 #undef SPDK_CONFIG_FUZZER 00:15:40.786 #define SPDK_CONFIG_FUZZER_LIB 00:15:40.786 #undef SPDK_CONFIG_GOLANG 00:15:40.786 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:40.786 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:40.786 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:40.786 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:15:40.786 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:40.786 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:40.786 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:15:40.786 #define SPDK_CONFIG_IDXD 1 00:15:40.786 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:40.786 #undef SPDK_CONFIG_IPSEC_MB 00:15:40.786 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:40.786 #define SPDK_CONFIG_ISAL 1 00:15:40.786 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:40.786 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:15:40.786 #define SPDK_CONFIG_LIBDIR 00:15:40.786 #undef SPDK_CONFIG_LTO 
00:15:40.786 #define SPDK_CONFIG_MAX_LCORES 00:15:40.786 #undef SPDK_CONFIG_NVME_CUSE 00:15:40.786 #undef SPDK_CONFIG_OCF 00:15:40.786 #define SPDK_CONFIG_OCF_PATH 00:15:40.786 #define SPDK_CONFIG_OPENSSL_PATH 00:15:40.786 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:40.786 #define SPDK_CONFIG_PGO_DIR 00:15:40.786 #undef SPDK_CONFIG_PGO_USE 00:15:40.786 #define SPDK_CONFIG_PREFIX /usr/local 00:15:40.786 #undef SPDK_CONFIG_RAID5F 00:15:40.786 #undef SPDK_CONFIG_RBD 00:15:40.786 #define SPDK_CONFIG_RDMA 1 00:15:40.786 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:40.786 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:40.786 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:15:40.786 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:40.786 #undef SPDK_CONFIG_SHARED 00:15:40.786 #undef SPDK_CONFIG_SMA 00:15:40.786 #define SPDK_CONFIG_TESTS 1 00:15:40.786 #undef SPDK_CONFIG_TSAN 00:15:40.786 #undef SPDK_CONFIG_UBLK 00:15:40.786 #undef SPDK_CONFIG_UBSAN 00:15:40.786 #define SPDK_CONFIG_UNIT_TESTS 1 00:15:40.786 #undef SPDK_CONFIG_URING 00:15:40.786 #define SPDK_CONFIG_URING_PATH 00:15:40.786 #undef SPDK_CONFIG_URING_ZNS 00:15:40.786 #undef SPDK_CONFIG_USDT 00:15:40.786 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:40.786 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:40.786 #undef SPDK_CONFIG_VFIO_USER 00:15:40.786 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:40.786 #undef SPDK_CONFIG_VHOST 00:15:40.786 #undef SPDK_CONFIG_VIRTIO 00:15:40.786 #undef SPDK_CONFIG_VTUNE 00:15:40.786 #define SPDK_CONFIG_VTUNE_DIR 00:15:40.786 #define SPDK_CONFIG_WERROR 1 00:15:40.786 #define SPDK_CONFIG_WPDK_DIR 00:15:40.786 #undef SPDK_CONFIG_XNVME 00:15:40.786 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:40.786 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:40.786 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.786 +++ [[ -e /bin/wpdk_common.sh ]] 00:15:40.786 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.786 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.786 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:15:40.786 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:15:40.786 ++++ export PATH 00:15:40.786 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:15:40.786 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:40.786 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:40.786 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:40.786 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:40.786 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:40.786 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:15:40.786 +++ TEST_TAG=N/A 00:15:40.786 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:40.786 +++ PM_OUTPUTDIR=/usr/home/vagrant/spdk_repo/spdk/../output/power 00:15:40.786 ++++ uname -s 00:15:40.786 +++ PM_OS=FreeBSD 00:15:40.786 +++ MONITOR_RESOURCES_SUDO=() 00:15:40.786 +++ declare -A MONITOR_RESOURCES_SUDO 00:15:40.786 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:40.786 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:40.786 +++ 
MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:40.786 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:40.786 +++ SUDO[0]= 00:15:40.786 +++ SUDO[1]='sudo -E' 00:15:40.786 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:40.786 +++ [[ FreeBSD == FreeBSD ]] 00:15:40.786 +++ MONITOR_RESOURCES=(collect-vmstat) 00:15:40.786 +++ [[ ! -d /usr/home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:40.786 ++ : 0 00:15:40.786 ++ export RUN_NIGHTLY 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_RUN_VALGRIND 00:15:40.786 ++ : 1 00:15:40.786 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:15:40.786 ++ : 1 00:15:40.786 ++ export SPDK_TEST_UNITTEST 00:15:40.786 ++ : 00:15:40.786 ++ export SPDK_TEST_AUTOBUILD 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_RELEASE_BUILD 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_ISAL 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_ISCSI 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_ISCSI_INITIATOR 00:15:40.786 ++ : 1 00:15:40.786 ++ export SPDK_TEST_NVME 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVME_PMR 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVME_BP 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVME_CLI 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVME_CUSE 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVME_FDP 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_NVMF 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_VFIOUSER 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_VFIOUSER_QEMU 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_FUZZER 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_FUZZER_SHORT 00:15:40.786 ++ : rdma 00:15:40.786 ++ export SPDK_TEST_NVMF_TRANSPORT 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_RBD 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_VHOST 00:15:40.786 ++ : 1 00:15:40.786 ++ export SPDK_TEST_BLOCKDEV 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_IOAT 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_BLOBFS 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_VHOST_INIT 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_LVOL 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_TEST_VBDEV_COMPRESS 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_RUN_ASAN 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_RUN_UBSAN 00:15:40.786 ++ : 00:15:40.786 ++ export SPDK_RUN_EXTERNAL_DPDK 00:15:40.786 ++ : 0 00:15:40.786 ++ export SPDK_RUN_NON_ROOT 00:15:40.786 ++ : 0 00:15:40.787 ++ export SPDK_TEST_CRYPTO 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_FTL 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_OCF 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_VMD 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_OPAL 00:15:40.787 ++ : 00:15:40.787 ++ export SPDK_TEST_NATIVE_DPDK 00:15:40.787 ++ : true 00:15:40.787 ++ export SPDK_AUTOTEST_X 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_RAID5 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_URING 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_USDT 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_USE_IGB_UIO 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_SCHEDULER 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_SCANBUILD 00:15:40.787 ++ : 00:15:40.787 ++ export SPDK_TEST_NVMF_NICS 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_SMA 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_DAOS 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_XNVME 00:15:40.787 ++ : 0 
00:15:40.787 ++ export SPDK_TEST_ACCEL_DSA 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_ACCEL_IAA 00:15:40.787 ++ : 00:15:40.787 ++ export SPDK_TEST_FUZZER_TARGET 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_TEST_NVMF_MDNS 00:15:40.787 ++ : 0 00:15:40.787 ++ export SPDK_JSONRPC_GO_CLIENT 00:15:40.787 ++ export SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:15:40.787 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:15:40.787 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:40.787 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:40.787 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:40.787 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:40.787 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:40.787 ++ LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:40.787 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:40.787 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:15:40.787 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:15:40.787 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:15:40.787 ++ export PYTHONDONTWRITEBYTECODE=1 00:15:40.787 ++ PYTHONDONTWRITEBYTECODE=1 00:15:40.787 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:40.787 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:40.787 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:40.787 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:40.787 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:15:40.787 ++ rm -rf /var/tmp/asan_suppression_file 00:15:40.787 ++ cat 00:15:40.787 ++ echo leak:libfuse3.so 00:15:40.787 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:40.787 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:40.787 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:40.787 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:40.787 ++ '[' -z /var/spdk/dependencies ']' 00:15:40.787 ++ export DEPENDENCY_DIR 00:15:40.787 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:15:40.787 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:15:40.787 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:15:40.787 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:15:40.787 ++ export QEMU_BIN= 00:15:40.787 ++ QEMU_BIN= 00:15:40.787 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:40.787 ++ 
VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:15:40.787 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:40.787 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:40.787 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:40.787 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:40.787 ++ '[' 0 -eq 0 ']' 00:15:40.787 ++ export valgrind= 00:15:40.787 ++ valgrind= 00:15:40.787 +++ uname -s 00:15:40.787 ++ '[' FreeBSD = Linux ']' 00:15:40.787 +++ uname -s 00:15:40.787 ++ '[' FreeBSD = FreeBSD ']' 00:15:40.787 ++ MAKE=gmake 00:15:40.787 +++ sysctl -a 00:15:40.787 +++ grep -E -i hw.ncpu 00:15:40.787 +++ awk '{print $2}' 00:15:41.045 ++ MAKEFLAGS=-j10 00:15:41.045 ++ HUGEMEM=2048 00:15:41.045 ++ export HUGEMEM=2048 00:15:41.045 ++ HUGEMEM=2048 00:15:41.045 ++ NO_HUGE=() 00:15:41.045 ++ TEST_MODE= 00:15:41.045 ++ [[ -z '' ]] 00:15:41.045 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:15:41.045 ++ exec 00:15:41.045 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:15:41.045 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:15:41.045 ++ set_test_storage 2147483648 00:15:41.045 ++ [[ -v testdir ]] 00:15:41.045 ++ local requested_size=2147483648 00:15:41.045 ++ local mount target_dir 00:15:41.045 ++ local -A mounts fss sizes avails uses 00:15:41.045 ++ local source fs size avail mount use 00:15:41.045 ++ local storage_fallback storage_candidates 00:15:41.045 +++ mktemp -udt spdk.XXXXXX 00:15:41.046 ++ storage_fallback=/tmp/spdk.XXXXXX.3jVcfIHs 00:15:41.046 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:41.046 ++ [[ -n '' ]] 00:15:41.046 ++ [[ -n '' ]] 00:15:41.046 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.3jVcfIHs/tests/unit /tmp/spdk.XXXXXX.3jVcfIHs 00:15:41.046 ++ requested_size=2214592512 00:15:41.046 ++ read -r source fs size use avail _ mount 00:15:41.046 +++ df -T 00:15:41.046 +++ grep -v Filesystem 00:15:41.046 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:15:41.046 ++ fss["$mount"]=ufs 00:15:41.046 ++ avails["$mount"]=17228279808 00:15:41.046 ++ sizes["$mount"]=31182712832 00:15:41.046 ++ uses["$mount"]=11459817472 00:15:41.046 ++ read -r source fs size use avail _ mount 00:15:41.046 ++ mounts["$mount"]=devfs 00:15:41.046 ++ fss["$mount"]=devfs 00:15:41.046 ++ avails["$mount"]=0 00:15:41.046 ++ sizes["$mount"]=1024 00:15:41.046 ++ uses["$mount"]=1024 00:15:41.046 ++ read -r source fs size use avail _ mount 00:15:41.046 ++ mounts["$mount"]=tmpfs 00:15:41.046 ++ fss["$mount"]=tmpfs 00:15:41.046 ++ avails["$mount"]=2147442688 00:15:41.046 ++ sizes["$mount"]=2147483648 00:15:41.046 ++ uses["$mount"]=40960 00:15:41.046 ++ read -r source fs size use avail _ mount 00:15:41.046 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_3/freebsd13-libvirt/output 00:15:41.046 ++ fss["$mount"]=fusefs.sshfs 00:15:41.046 ++ avails["$mount"]=90781261824 00:15:41.046 ++ sizes["$mount"]=105088212992 00:15:41.046 ++ uses["$mount"]=8921518080 00:15:41.046 ++ read -r source fs size use avail _ mount 00:15:41.046 ++ printf '* Looking for test storage...\n' 00:15:41.046 * Looking for test storage... 
00:15:41.046 ++ local target_space new_size 00:15:41.046 ++ for target_dir in "${storage_candidates[@]}" 00:15:41.046 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:41.046 +++ awk '$1 !~ /Filesystem/{print $6}' 00:15:41.046 ++ mount=/ 00:15:41.046 ++ target_space=17228279808 00:15:41.046 ++ (( target_space == 0 || target_space < requested_size )) 00:15:41.046 ++ (( target_space >= requested_size )) 00:15:41.046 ++ [[ ufs == tmpfs ]] 00:15:41.046 ++ [[ ufs == ramfs ]] 00:15:41.046 ++ [[ / == / ]] 00:15:41.046 ++ new_size=13674409984 00:15:41.046 ++ (( new_size * 100 / sizes[/] > 95 )) 00:15:41.046 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:41.046 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:41.046 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:41.046 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:15:41.046 ++ return 0 00:15:41.046 ++ set -o errtrace 00:15:41.046 ++ shopt -s extdebug 00:15:41.046 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:15:41.046 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1683 -- # true 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@29 -- # exec 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@18 -- # set -x 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@152 -- # '[' 0 -eq 1 ']' 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@159 -- # '[' -z x ']' 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@166 -- # '[' 0 -eq 1 ']' 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@179 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@179 -- # CC_TYPE=CC_TYPE=clang 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@180 -- # hash lcov 00:15:41.046 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 180: hash: lcov: not found 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@183 -- # cov_avail=no 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@185 -- # '[' no = yes ']' 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@207 -- # uname -m 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@207 -- # '[' amd64 = aarch64 ']' 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@211 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:41.046 ************************************ 00:15:41.046 START TEST unittest_pci_event 00:15:41.046 ************************************ 00:15:41.046 07:28:34 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:15:41.046 00:15:41.046 00:15:41.046 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.046 http://cunit.sourceforge.net/ 00:15:41.046 00:15:41.046 00:15:41.046 Suite: pci_event 00:15:41.046 Test: test_pci_parse_event ...passed 00:15:41.046 00:15:41.046 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.046 suites 1 1 n/a 0 0 00:15:41.046 tests 1 1 1 0 0 00:15:41.046 asserts 1 1 1 0 n/a 00:15:41.046 00:15:41.046 Elapsed time = 0.000 seconds 00:15:41.046 00:15:41.046 real 0m0.024s 00:15:41.046 user 0m0.004s 00:15:41.046 sys 0m0.008s 00:15:41.046 07:28:34 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.046 07:28:34 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:15:41.046 ************************************ 00:15:41.046 END TEST unittest_pci_event 00:15:41.046 ************************************ 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@212 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:41.046 ************************************ 00:15:41.046 START TEST unittest_include 00:15:41.046 ************************************ 00:15:41.046 07:28:34 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:15:41.046 00:15:41.046 00:15:41.046 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.046 http://cunit.sourceforge.net/ 00:15:41.046 00:15:41.046 00:15:41.046 Suite: histogram 00:15:41.046 Test: histogram_test ...passed 00:15:41.046 Test: histogram_merge ...passed 00:15:41.046 00:15:41.046 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.046 suites 1 1 n/a 0 0 00:15:41.046 tests 2 2 2 0 0 00:15:41.046 asserts 50 50 50 0 n/a 00:15:41.046 00:15:41.046 Elapsed time = 0.000 seconds 00:15:41.046 00:15:41.046 real 0m0.007s 00:15:41.046 user 0m0.000s 00:15:41.046 sys 0m0.008s 00:15:41.046 07:28:34 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.046 07:28:34 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:15:41.046 ************************************ 00:15:41.046 END TEST unittest_include 00:15:41.046 ************************************ 00:15:41.046 07:28:34 unittest -- unit/unittest.sh@213 -- # run_test unittest_bdev unittest_bdev 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.046 07:28:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:41.046 ************************************ 00:15:41.046 START TEST unittest_bdev 00:15:41.046 ************************************ 00:15:41.046 07:28:34 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:15:41.046 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:15:41.046 00:15:41.046 00:15:41.046 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.046 http://cunit.sourceforge.net/ 00:15:41.046 00:15:41.046 00:15:41.046 Suite: 
bdev 00:15:41.046 Test: bytes_to_blocks_test ...passed 00:15:41.046 Test: num_blocks_test ...passed 00:15:41.046 Test: io_valid_test ...passed 00:15:41.046 Test: open_write_test ...[2024-05-16 07:28:34.563345] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:15:41.046 [2024-05-16 07:28:34.563678] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:15:41.046 [2024-05-16 07:28:34.563716] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:15:41.046 passed 00:15:41.046 Test: claim_test ...passed 00:15:41.046 Test: alias_add_del_test ...[2024-05-16 07:28:34.569017] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:15:41.046 [2024-05-16 07:28:34.569095] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4605:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:15:41.046 passed 00:15:41.046 Test: get_device_stat_test ...passed 00:15:41.046 Test: bdev_io_types_test ...[2024-05-16 07:28:34.569448] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:15:41.046 passed 00:15:41.046 Test: bdev_io_wait_test ...passed 00:15:41.046 Test: bdev_io_spans_split_test ...passed 00:15:41.046 Test: bdev_io_boundary_split_test ...passed 00:15:41.046 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-16 07:28:34.574366] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:15:41.046 passed 00:15:41.046 Test: bdev_io_mix_split_test ...passed 00:15:41.046 Test: bdev_io_split_with_io_wait ...passed 00:15:41.046 Test: bdev_io_write_unit_split_test ...[2024-05-16 07:28:34.578334] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:15:41.046 [2024-05-16 07:28:34.578368] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:15:41.046 [2024-05-16 07:28:34.578377] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:15:41.046 [2024-05-16 07:28:34.578387] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:15:41.046 passed 00:15:41.047 Test: bdev_io_alignment_with_boundary ...passed 00:15:41.047 Test: bdev_io_alignment ...passed 00:15:41.047 Test: bdev_histograms ...passed 00:15:41.047 Test: bdev_write_zeroes ...passed 00:15:41.047 Test: bdev_compare_and_write ...passed 00:15:41.047 Test: bdev_compare ...passed 00:15:41.047 Test: bdev_compare_emulated ...passed 00:15:41.047 Test: bdev_zcopy_write ...passed 00:15:41.047 Test: bdev_zcopy_read ...passed 00:15:41.047 Test: bdev_open_while_hotremove ...passed 00:15:41.047 Test: bdev_close_while_hotremove ...passed 00:15:41.047 Test: bdev_open_ext_test ...passed 00:15:41.047 Test: bdev_open_ext_unregister ...passed 00:15:41.047 Test: bdev_set_io_timeout ...[2024-05-16 07:28:34.593937] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:15:41.047 [2024-05-16 07:28:34.593999] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8136:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:15:41.047 passed 00:15:41.047 Test: bdev_set_qd_sampling ...passed 00:15:41.047 Test: lba_range_overlap ...passed 00:15:41.047 Test: lock_lba_range_check_ranges ...passed 00:15:41.047 Test: lock_lba_range_with_io_outstanding ...passed 00:15:41.047 Test: lock_lba_range_overlapped ...passed 00:15:41.047 Test: bdev_quiesce ...[2024-05-16 07:28:34.601796] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10059:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:15:41.047 passed 00:15:41.047 Test: bdev_io_abort ...passed 00:15:41.047 Test: bdev_unmap ...passed 00:15:41.047 Test: bdev_write_zeroes_split_test ...passed 00:15:41.047 Test: bdev_set_options_test ...[2024-05-16 07:28:34.607309] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:15:41.047 passed 00:15:41.047 Test: bdev_get_memory_domains ...passed 00:15:41.047 Test: bdev_io_ext ...passed 00:15:41.047 Test: bdev_io_ext_no_opts ...passed 00:15:41.047 Test: bdev_io_ext_invalid_opts ...passed 00:15:41.306 Test: bdev_io_ext_split ...passed 00:15:41.306 Test: bdev_io_ext_bounce_buffer ...passed 00:15:41.306 Test: bdev_register_uuid_alias ...[2024-05-16 07:28:34.615758] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name e756f1ce-1355-11ef-8e8f-9dd684e56d79 already exists 00:15:41.306 [2024-05-16 07:28:34.615790] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:e756f1ce-1355-11ef-8e8f-9dd684e56d79 alias for bdev bdev0 00:15:41.306 passed 00:15:41.306 Test: bdev_unregister_by_name ...[2024-05-16 07:28:34.616094] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7926:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:15:41.306 [2024-05-16 07:28:34.616105] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:15:41.306 passed 00:15:41.306 Test: for_each_bdev_test ...passed 00:15:41.306 Test: bdev_seek_test ...passed 00:15:41.306 Test: bdev_copy ...passed 00:15:41.306 Test: bdev_copy_split_test ...passed 00:15:41.306 Test: examine_locks ...passed 00:15:41.306 Test: claim_v2_rwo ...[2024-05-16 07:28:34.620665] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.306 [2024-05-16 07:28:34.620811] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.306 [2024-05-16 07:28:34.620833] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620841] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620864] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620873] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8656:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:15:41.307 passed 00:15:41.307 Test: claim_v2_rom ...passed 00:15:41.307 Test: claim_v2_rwm ...[2024-05-16 07:28:34.620896] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620911] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620918] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620928] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620936] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:15:41.307 [2024-05-16 07:28:34.620944] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:41.307 [2024-05-16 07:28:34.620959] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:15:41.307 passed 00:15:41.307 Test: claim_v2_existing_writer ...passed 00:15:41.307 Test: claim_v2_existing_v1 ...passed 00:15:41.307 Test: claim_v1_existing_v2 ...[2024-05-16 07:28:34.620967] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8030:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620974] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620981] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620987] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.620993] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8748:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621002] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8729:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:15:41.307 [2024-05-16 07:28:34.621019] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:41.307 [2024-05-16 07:28:34.621026] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8694:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:15:41.307 [2024-05-16 07:28:34.621042] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621049] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621056] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621071] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621079] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:15:41.307 passed 00:15:41.307 Test: examine_claimed ...passed 00:15:41.307 00:15:41.307 [2024-05-16 07:28:34.621086] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8497:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:15:41.307 [2024-05-16 07:28:34.621115] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8825:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:15:41.307 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.307 suites 1 1 n/a 0 0 00:15:41.307 tests 59 59 59 0 0 00:15:41.307 asserts 4599 4599 4599 0 n/a 00:15:41.307 00:15:41.307 Elapsed time = 0.070 seconds 00:15:41.307 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:15:41.307 00:15:41.307 00:15:41.307 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.307 http://cunit.sourceforge.net/ 00:15:41.307 00:15:41.307 00:15:41.307 Suite: nvme 00:15:41.307 Test: test_create_ctrlr ...passed 00:15:41.307 Test: test_reset_ctrlr ...passed 00:15:41.307 Test: test_race_between_reset_and_destruct_ctrlr ...[2024-05-16 07:28:34.629643] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: 
Resetting controller failed. 00:15:41.307 passed 00:15:41.307 Test: test_failover_ctrlr ...passed 00:15:41.307 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:15:41.307 Test: test_pending_reset ...[2024-05-16 07:28:34.630011] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.630040] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.630062] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.630227] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 passed 00:15:41.307 Test: test_attach_ctrlr ...[2024-05-16 07:28:34.630260] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.630332] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:15:41.307 passed 00:15:41.307 Test: test_aer_cb ...passed 00:15:41.307 Test: test_submit_nvme_cmd ...passed 00:15:41.307 Test: test_add_remove_trid ...passed 00:15:41.307 Test: test_abort ...passed 00:15:41.307 Test: test_get_io_qpair ...passed 00:15:41.307 Test: test_bdev_unregister ...passed 00:15:41.307 Test: test_compare_ns ...passed 00:15:41.307 Test: test_init_ana_log_page ...[2024-05-16 07:28:34.630584] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7436:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:15:41.307 [2024-05-16 07:28:34.630842] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 passed 00:15:41.307 Test: test_get_memory_domains ...passed 00:15:41.307 Test: test_reconnect_qpair ...passed 00:15:41.307 Test: test_create_bdev_ctrlr ...[2024-05-16 07:28:34.630889] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5362:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:15:41.307 passed 00:15:41.307 Test: test_add_multi_ns_to_bdev ...passed 00:15:41.307 Test: test_add_multi_io_paths_to_nbdev_ch ...[2024-05-16 07:28:34.631020] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4553:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:15:41.307 passed 00:15:41.307 Test: test_admin_path ...passed 00:15:41.307 Test: test_reset_bdev_ctrlr ...passed 00:15:41.307 Test: test_find_io_path ...passed 00:15:41.307 Test: test_retry_io_if_ana_state_is_updating ...passed 00:15:41.307 Test: test_retry_io_for_io_path_error ...passed 00:15:41.307 Test: test_retry_io_count ...passed 00:15:41.307 Test: test_concurrent_read_ana_log_page ...passed 00:15:41.307 Test: test_retry_io_for_ana_error ...passed 00:15:41.307 Test: test_check_io_error_resiliency_params ...[2024-05-16 07:28:34.631631] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6056:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:15:41.307 [2024-05-16 07:28:34.631649] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6060:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:15:41.307 [2024-05-16 07:28:34.631660] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6069:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:15:41.307 [2024-05-16 07:28:34.631670] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6072:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:15:41.307 [2024-05-16 07:28:34.631681] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:15:41.307 [2024-05-16 07:28:34.631692] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:15:41.307 [2024-05-16 07:28:34.631702] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6064:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:15:41.307 passed 00:15:41.307 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:15:41.307 Test: test_reconnect_ctrlr ...passed 00:15:41.307 Test: test_retry_failover_ctrlr ...passed 00:15:41.307 Test: test_fail_path ...[2024-05-16 07:28:34.631712] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6079:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:15:41.307 [2024-05-16 07:28:34.631722] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:15:41.307 [2024-05-16 07:28:34.631797] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.631817] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.631854] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.631871] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.631888] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.307 [2024-05-16 07:28:34.631935] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:41.307 passed 00:15:41.307 Test: test_nvme_ns_cmp ...passed 00:15:41.307 Test: test_ana_transition ...passed 00:15:41.307 Test: test_set_preferred_path ...passed 00:15:41.307 Test: test_find_next_io_path ...passed 00:15:41.307 Test: test_find_io_path_min_qd ...passed 00:15:41.307 Test: test_disable_auto_failback ...[2024-05-16 07:28:34.631995] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 [2024-05-16 07:28:34.632015] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 [2024-05-16 07:28:34.632033] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 [2024-05-16 07:28:34.632049] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 [2024-05-16 07:28:34.632064] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 passed 00:15:41.308 Test: test_set_multipath_policy ...passed 00:15:41.308 Test: test_uuid_generation ...[2024-05-16 07:28:34.632224] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 passed 00:15:41.308 Test: test_retry_io_to_same_path ...passed 00:15:41.308 Test: test_race_between_reset_and_disconnected ...passed 00:15:41.308 Test: test_ctrlr_op_rpc ...passed 00:15:41.308 Test: test_bdev_ctrlr_op_rpc ...passed 00:15:41.308 Test: test_disable_enable_ctrlr ...passed 00:15:41.308 Test: test_delete_ctrlr_done ...passed 00:15:41.308 Test: test_ns_remove_during_reset ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 1 1 n/a 0 0 00:15:41.308 tests 48 48 48 0 0 00:15:41.308 asserts 3565 3565 3565 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.008 seconds 00:15:41.308 [2024-05-16 07:28:34.664464] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:41.308 [2024-05-16 07:28:34.664528] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
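The check_io_error_resiliency_params failures above spell out how the bdev_nvme reconnect options have to relate to one another. Below is a minimal C sketch of those constraints, assuming plain integer fields named after the options quoted in the log; it paraphrases the rules the unit test exercises, not the actual bdev_nvme_check_io_error_resiliency_params source.

    #include <stdbool.h>
    #include <stdint.h>

    /* Constraints taken from the error messages logged by bdev_nvme_ut above. */
    static bool
    io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                     uint32_t reconnect_delay_sec,
                                     uint32_t fast_io_fail_timeout_sec)
    {
        if (ctrlr_loss_timeout_sec < -1) {
            /* "ctrlr_loss_timeout_sec can't be less than -1." */
            return false;
        }
        if (ctrlr_loss_timeout_sec == 0) {
            /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
             * if ctrlr_loss_timeout_sec is 0." */
            return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
            /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0." */
            return false;
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            (int64_t)reconnect_delay_sec > ctrlr_loss_timeout_sec) {
            /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec." */
            return false;
        }
        if (fast_io_fail_timeout_sec != 0) {
            if (ctrlr_loss_timeout_sec > 0 &&
                (int64_t)fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec) {
                /* "fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec." */
                return false;
            }
            if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                /* reconnect delay can't exceed the fast-io-fail window. */
                return false;
            }
        }
        return true;
    }

In short, per the logged rules: disabling the ctrlr-loss timeout (0) forces the other two options off, and otherwise the reconnect delay must fit inside both the fast-io-fail window and the overall ctrlr-loss window.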
00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 Test Options 00:15:41.308 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:15:41.308 00:15:41.308 Suite: raid 00:15:41.308 Test: test_create_raid ...passed 00:15:41.308 Test: test_create_raid_superblock ...passed 00:15:41.308 Test: test_delete_raid ...passed 00:15:41.308 Test: test_create_raid_invalid_args ...[2024-05-16 07:28:34.673778] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:15:41.308 [2024-05-16 07:28:34.673963] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:15:41.308 [2024-05-16 07:28:34.674033] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:15:41.308 [2024-05-16 07:28:34.674058] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:41.308 [2024-05-16 07:28:34.674068] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:15:41.308 passed 00:15:41.308 Test: test_delete_raid_invalid_args ...passed 00:15:41.308 Test: test_io_channel ...passed 00:15:41.308 Test: test_reset_io ...[2024-05-16 07:28:34.674177] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:15:41.308 [2024-05-16 07:28:34.674186] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:15:41.308 passed 00:15:41.308 Test: test_multi_raid ...passed 00:15:41.308 Test: test_io_type_supported ...passed 00:15:41.308 Test: test_raid_json_dump_info ...passed 00:15:41.308 Test: test_context_size ...passed 00:15:41.308 Test: test_raid_level_conversions ...passed 00:15:41.308 Test: test_raid_io_split ...passed 00:15:41.308 Test: test_raid_process ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 1 1 n/a 0 0 00:15:41.308 tests 14 14 14 0 0 00:15:41.308 asserts 6183 6183 6183 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.000 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: raid_sb 00:15:41.308 Test: test_raid_bdev_write_superblock ...passed 00:15:41.308 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:15:41.308 Test: test_raid_bdev_parse_superblock ...[2024-05-16 07:28:34.680742] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:41.308 passed 00:15:41.308 Suite: raid_sb_md 00:15:41.308 Test: test_raid_bdev_write_superblock ...passed 
00:15:41.308 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:15:41.308 Test: test_raid_bdev_parse_superblock ...passed 00:15:41.308 Suite: raid_sb_md_interleaved 00:15:41.308 Test: test_raid_bdev_write_superblock ...passed 00:15:41.308 Test: test_raid_bdev_load_base_bdev_superblock ...[2024-05-16 07:28:34.680930] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:41.308 passed 00:15:41.308 Test: test_raid_bdev_parse_superblock ...passed 00:15:41.308 [2024-05-16 07:28:34.681006] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 3 3 n/a 0 0 00:15:41.308 tests 9 9 9 0 0 00:15:41.308 asserts 139 139 139 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.000 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: concat 00:15:41.308 Test: test_concat_start ...passed 00:15:41.308 Test: test_concat_rw ...passed 00:15:41.308 Test: test_concat_null_payload ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 1 1 n/a 0 0 00:15:41.308 tests 3 3 3 0 0 00:15:41.308 asserts 8460 8460 8460 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.000 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: raid0 00:15:41.308 Test: test_write_io ...passed 00:15:41.308 Test: test_read_io ...passed 00:15:41.308 Test: test_unmap_io ...passed 00:15:41.308 Test: test_io_failure ...passed 00:15:41.308 Suite: raid0_dif 00:15:41.308 Test: test_write_io ...passed 00:15:41.308 Test: test_read_io ...passed 00:15:41.308 Test: test_unmap_io ...passed 00:15:41.308 Test: test_io_failure ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 2 2 n/a 0 0 00:15:41.308 tests 8 8 8 0 0 00:15:41.308 asserts 368291 368291 368291 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.008 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: raid1 00:15:41.308 Test: test_raid1_start ...passed 00:15:41.308 Test: test_raid1_read_balancing ...passed 00:15:41.308 Test: test_raid1_write_error ...passed 00:15:41.308 Test: test_raid1_read_error ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 1 1 n/a 0 0 00:15:41.308 tests 4 4 4 0 0 00:15:41.308 asserts 4374 4374 4374 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.000 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@27 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: zone 00:15:41.308 Test: test_zone_get_operation ...passed 00:15:41.308 Test: test_bdev_zone_get_info ...passed 00:15:41.308 Test: test_bdev_zone_management ...passed 00:15:41.308 Test: test_bdev_zone_append ...passed 00:15:41.308 Test: test_bdev_zone_append_with_md ...passed 00:15:41.308 Test: test_bdev_zone_appendv ...passed 00:15:41.308 Test: test_bdev_zone_appendv_with_md ...passed 00:15:41.308 Test: test_bdev_io_get_append_location ...passed 00:15:41.308 00:15:41.308 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.308 suites 1 1 n/a 0 0 00:15:41.308 tests 8 8 8 0 0 00:15:41.308 asserts 94 94 94 0 n/a 00:15:41.308 00:15:41.308 Elapsed time = 0.000 seconds 00:15:41.308 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:15:41.308 00:15:41.308 00:15:41.308 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.308 http://cunit.sourceforge.net/ 00:15:41.308 00:15:41.308 00:15:41.308 Suite: gpt_parse 00:15:41.308 Test: test_parse_mbr_and_primary ...[2024-05-16 07:28:34.716354] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:41.308 [2024-05-16 07:28:34.716566] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:41.309 [2024-05-16 07:28:34.716594] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:15:41.309 [2024-05-16 07:28:34.716607] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:15:41.309 [2024-05-16 07:28:34.716620] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:15:41.309 [2024-05-16 07:28:34.716631] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:15:41.309 passed 00:15:41.309 Test: test_parse_secondary ...[2024-05-16 07:28:34.716787] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:15:41.309 [2024-05-16 07:28:34.716798] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:15:41.309 [2024-05-16 07:28:34.716810] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:15:41.309 [2024-05-16 07:28:34.716820] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:15:41.309 passed 00:15:41.309 Test: test_check_mbr ...passed 00:15:41.309 Test: test_read_header ...[2024-05-16 07:28:34.716972] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:41.309 [2024-05-16 07:28:34.716983] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:15:41.309 [2024-05-16 07:28:34.716999] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:15:41.309 passed 00:15:41.309 Test: test_read_partitions ...[2024-05-16 07:28:34.717011] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:15:41.309 [2024-05-16 07:28:34.717023] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:15:41.309 [2024-05-16 07:28:34.717035] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:15:41.309 [2024-05-16 07:28:34.717047] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:15:41.309 [2024-05-16 07:28:34.717057] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:15:41.309 [2024-05-16 07:28:34.717073] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:15:41.309 [2024-05-16 07:28:34.717085] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:15:41.309 [2024-05-16 07:28:34.717095] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:15:41.309 [2024-05-16 07:28:34.717105] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:15:41.309 passed 00:15:41.309 00:15:41.309 [2024-05-16 07:28:34.717184] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:15:41.309 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.309 suites 1 1 n/a 0 0 00:15:41.309 tests 5 5 5 0 0 00:15:41.309 asserts 33 33 33 0 n/a 00:15:41.309 00:15:41.309 Elapsed time = 0.000 seconds 00:15:41.309 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:15:41.309 00:15:41.309 00:15:41.309 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.309 http://cunit.sourceforge.net/ 00:15:41.309 00:15:41.309 00:15:41.309 Suite: bdev_part 00:15:41.309 Test: part_test ...[2024-05-16 07:28:34.723662] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4575:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:15:41.309 passed 00:15:41.309 Test: part_free_test ...passed 00:15:41.309 Test: part_get_io_channel_test ...passed 00:15:41.309 Test: part_construct_ext ...passed 00:15:41.309 00:15:41.309 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.309 suites 1 1 n/a 0 0 00:15:41.309 tests 4 4 4 0 0 00:15:41.309 asserts 48 48 48 0 n/a 00:15:41.309 00:15:41.309 Elapsed time = 0.008 seconds 00:15:41.309 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:15:41.309 00:15:41.309 00:15:41.309 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.309 http://cunit.sourceforge.net/ 00:15:41.309 00:15:41.309 00:15:41.309 Suite: scsi_nvme_suite 00:15:41.309 Test: scsi_nvme_translate_test ...passed 00:15:41.309 00:15:41.309 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.309 suites 1 1 n/a 0 0 00:15:41.309 tests 1 1 1 0 0 
00:15:41.309 asserts 104 104 104 0 n/a 00:15:41.309 00:15:41.309 Elapsed time = 0.000 seconds 00:15:41.309 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:15:41.309 00:15:41.309 00:15:41.309 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.309 http://cunit.sourceforge.net/ 00:15:41.309 00:15:41.309 00:15:41.309 Suite: lvol 00:15:41.309 Test: ut_lvs_init ...[2024-05-16 07:28:34.737499] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:15:41.309 passed 00:15:41.309 Test: ut_lvol_init ...[2024-05-16 07:28:34.737926] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:15:41.309 passed 00:15:41.309 Test: ut_lvol_snapshot ...passed 00:15:41.309 Test: ut_lvol_clone ...passed 00:15:41.309 Test: ut_lvs_destroy ...passed 00:15:41.309 Test: ut_lvs_unload ...passed 00:15:41.309 Test: ut_lvol_resize ...passed 00:15:41.309 Test: ut_lvol_set_read_only ...passed 00:15:41.309 Test: ut_lvol_hotremove ...passed 00:15:41.309 Test: ut_vbdev_lvol_get_io_channel ...passed 00:15:41.309 Test: ut_vbdev_lvol_io_type_supported ...passed 00:15:41.309 Test: ut_lvol_read_write ...passed 00:15:41.309 Test: ut_vbdev_lvol_submit_request ...passed 00:15:41.309 Test: ut_lvol_examine_config ...passed 00:15:41.309 Test: ut_lvol_examine_disk ...[2024-05-16 07:28:34.738110] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:15:41.309 [2024-05-16 07:28:34.738232] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:15:41.309 passed 00:15:41.309 Test: ut_lvol_rename ...passed 00:15:41.309 Test: ut_bdev_finish ...passed 00:15:41.309 Test: ut_lvs_rename ...passed 00:15:41.309 Test: ut_lvol_seek ...passed 00:15:41.309 Test: ut_esnap_dev_create ...passed 00:15:41.309 Test: ut_lvol_esnap_clone_bad_args ...passed 00:15:41.309 Test: ut_lvol_shallow_copy ...passed 00:15:41.309 Test: ut_lvol_set_external_parent ...passed 00:15:41.309 00:15:41.309 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.309 suites 1 1 n/a 0 0 00:15:41.309 tests 23 23 23 0 0 00:15:41.309 asserts 798 798 798 0 n/a 00:15:41.309 00:15:41.309 Elapsed time = 0.000 seconds 00:15:41.309 [2024-05-16 07:28:34.738290] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:15:41.309 [2024-05-16 07:28:34.738306] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:15:41.309 [2024-05-16 07:28:34.738361] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:15:41.309 [2024-05-16 07:28:34.738377] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:15:41.309 [2024-05-16 07:28:34.738390] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:15:41.309 [2024-05-16 07:28:34.738418] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 
'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:15:41.309 [2024-05-16 07:28:34.738447] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:15:41.309 [2024-05-16 07:28:34.738461] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:15:41.309 [2024-05-16 07:28:34.738493] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:15:41.309 [2024-05-16 07:28:34.738506] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:15:41.309 [2024-05-16 07:28:34.738527] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:15:41.309 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:15:41.309 00:15:41.309 00:15:41.309 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.309 http://cunit.sourceforge.net/ 00:15:41.309 00:15:41.309 00:15:41.309 Suite: zone_block 00:15:41.309 Test: test_zone_block_create ...passed 00:15:41.309 Test: test_zone_block_create_invalid ...[2024-05-16 07:28:34.751909] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:15:41.309 passed 00:15:41.309 Test: test_get_zone_info ...passed 00:15:41.309 Test: test_supported_io_types ...passed 00:15:41.309 Test: test_reset_zone ...[2024-05-16 07:28:34.752121] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-16 07:28:34.752144] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:15:41.309 [2024-05-16 07:28:34.752159] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-16 07:28:34.752174] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:15:41.309 [2024-05-16 07:28:34.752186] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-16 07:28:34.752199] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:15:41.309 [2024-05-16 07:28:34.752210] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-16 07:28:34.752282] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752303] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:15:41.310 [2024-05-16 07:28:34.752318] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 passed 00:15:41.310 Test: test_open_zone ...[2024-05-16 07:28:34.752382] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752398] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752444] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 passed 00:15:41.310 Test: test_zone_write ...[2024-05-16 07:28:34.752695] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752710] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752753] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:15:41.310 [2024-05-16 07:28:34.752766] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.752781] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:15:41.310 [2024-05-16 07:28:34.752793] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.753377] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:15:41.310 [2024-05-16 07:28:34.753401] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.753416] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:15:41.310 [2024-05-16 07:28:34.753427] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754093] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:15:41.310 [2024-05-16 07:28:34.754113] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:15:41.310 passed 00:15:41.310 Test: test_zone_read ...passed 00:15:41.310 Test: test_close_zone ...[2024-05-16 07:28:34.754152] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:15:41.310 [2024-05-16 07:28:34.754165] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754181] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:15:41.310 [2024-05-16 07:28:34.754192] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754253] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:15:41.310 [2024-05-16 07:28:34.754265] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 passed 00:15:41.310 Test: test_finish_zone ...passed 00:15:41.310 Test: test_append_zone ...[2024-05-16 07:28:34.754313] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754332] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754377] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754392] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754458] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754473] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754508] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:15:41.310 [2024-05-16 07:28:34.754520] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 [2024-05-16 07:28:34.754535] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:15:41.310 [2024-05-16 07:28:34.754547] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:15:41.310 passed 00:15:41.310 00:15:41.310 [2024-05-16 07:28:34.755713] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:15:41.310 [2024-05-16 07:28:34.755728] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:15:41.310 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.310 suites 1 1 n/a 0 0 00:15:41.310 tests 11 11 11 0 0 00:15:41.310 asserts 3437 3437 3437 0 n/a 00:15:41.310 00:15:41.310 Elapsed time = 0.000 seconds 00:15:41.310 07:28:34 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:15:41.310 00:15:41.310 00:15:41.310 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.310 http://cunit.sourceforge.net/ 00:15:41.310 00:15:41.310 00:15:41.310 Suite: bdev 00:15:41.310 Test: basic ...[2024-05-16 07:28:34.766684] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:15:41.310 [2024-05-16 07:28:34.766951] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82d0b5480 (0x248db0): Operation not permitted (rc=-1) 00:15:41.310 [2024-05-16 07:28:34.766987] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248db9): Operation not permitted (rc=-1) 00:15:41.310 passed 00:15:41.310 Test: unregister_and_close ...passed 00:15:41.310 Test: unregister_and_close_different_threads ...passed 00:15:41.310 Test: basic_qos ...passed 00:15:41.310 Test: put_channel_during_reset ...passed 00:15:41.310 Test: aborted_reset ...passed 00:15:41.310 Test: aborted_reset_no_outstanding_io ...passed 00:15:41.310 Test: io_during_reset ...passed 00:15:41.310 Test: reset_completions ...passed 00:15:41.310 Test: io_during_qos_queue ...passed 00:15:41.310 Test: io_during_qos_reset ...passed 00:15:41.310 Test: enomem ...passed 00:15:41.310 Test: enomem_multi_bdev ...passed 00:15:41.310 Test: enomem_multi_bdev_unregister ...passed 00:15:41.310 Test: enomem_multi_io_target ...passed 00:15:41.310 Test: qos_dynamic_enable ...passed 00:15:41.310 Test: bdev_histograms_mt ...passed 00:15:41.310 Test: bdev_set_io_timeout_mt ...passed 00:15:41.310 Test: lock_lba_range_then_submit_io ...[2024-05-16 07:28:34.800510] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x82d0b5600 not unregistered 00:15:41.310 [2024-05-16 07:28:34.801447] thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x248d98 already registered (old:0x82d0b5600 new:0x82d0b5780) 00:15:41.310 passed 00:15:41.310 Test: unregister_during_reset ...passed 00:15:41.310 Test: event_notify_and_close ...passed 00:15:41.310 Suite: bdev_wrong_thread 00:15:41.310 Test: spdk_bdev_register_wt ...[2024-05-16 07:28:34.804854] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x82d07e700 (0x82d07e700) 00:15:41.310 passed 00:15:41.310 Test: spdk_bdev_examine_wt ...passed[2024-05-16 07:28:34.804893] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82d07e700 (0x82d07e700) 00:15:41.310 00:15:41.310 00:15:41.310 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.310 suites 2 2 n/a 0 0 00:15:41.310 tests 23 23 23 0 0 00:15:41.310 asserts 
601 601 601 0 n/a 00:15:41.310 00:15:41.310 Elapsed time = 0.047 seconds 00:15:41.310 00:15:41.310 real 0m0.257s 00:15:41.310 user 0m0.182s 00:15:41.310 sys 0m0.060s 00:15:41.310 07:28:34 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.310 07:28:34 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:15:41.310 ************************************ 00:15:41.310 END TEST unittest_bdev 00:15:41.310 ************************************ 00:15:41.310 07:28:34 unittest -- unit/unittest.sh@214 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:41.310 07:28:34 unittest -- unit/unittest.sh@219 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:41.310 07:28:34 unittest -- unit/unittest.sh@224 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:41.310 07:28:34 unittest -- unit/unittest.sh@228 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:41.310 07:28:34 unittest -- unit/unittest.sh@232 -- # run_test unittest_blob_blobfs unittest_blob 00:15:41.310 07:28:34 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:41.310 07:28:34 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.310 07:28:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:41.310 ************************************ 00:15:41.310 START TEST unittest_blob_blobfs 00:15:41.310 ************************************ 00:15:41.310 07:28:34 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:15:41.310 07:28:34 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:15:41.310 07:28:34 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:15:41.310 00:15:41.310 00:15:41.310 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.310 http://cunit.sourceforge.net/ 00:15:41.310 00:15:41.310 00:15:41.310 Suite: blob_nocopy_noextent 00:15:41.310 Test: blob_init ...[2024-05-16 07:28:34.862529] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:41.569 passed 00:15:41.569 Test: blob_thin_provision ...passed 00:15:41.569 Test: blob_read_only ...passed 00:15:41.569 Test: bs_load ...[2024-05-16 07:28:34.932970] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:41.569 passed 00:15:41.569 Test: bs_load_custom_cluster_size ...passed 00:15:41.569 Test: bs_load_after_failed_grow ...passed 00:15:41.569 Test: bs_cluster_sz ...[2024-05-16 07:28:34.952804] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:41.569 [2024-05-16 07:28:34.952853] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:15:41.569 [2024-05-16 07:28:34.952866] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:41.569 passed 00:15:41.569 Test: bs_resize_md ...passed 00:15:41.569 Test: bs_destroy ...passed 00:15:41.569 Test: bs_type ...passed 00:15:41.569 Test: bs_super_block ...passed 00:15:41.569 Test: bs_test_recover_cluster_count ...passed 00:15:41.569 Test: bs_grow_live ...passed 00:15:41.569 Test: bs_grow_live_no_space ...passed 00:15:41.569 Test: bs_test_grow ...passed 00:15:41.569 Test: blob_serialize_test ...passed 00:15:41.569 Test: super_block_crc ...passed 00:15:41.569 Test: blob_thin_prov_write_count_io ...passed 00:15:41.569 Test: blob_thin_prov_unmap_cluster ...passed 00:15:41.569 Test: bs_load_iter_test ...passed 00:15:41.569 Test: blob_relations ...[2024-05-16 07:28:35.097100] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.569 [2024-05-16 07:28:35.097138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.569 [2024-05-16 07:28:35.097221] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.569 [2024-05-16 07:28:35.097231] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.569 passed 00:15:41.569 Test: blob_relations2 ...[2024-05-16 07:28:35.107393] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.569 [2024-05-16 07:28:35.107415] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.569 [2024-05-16 07:28:35.107425] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.569 [2024-05-16 07:28:35.107432] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.569 [2024-05-16 07:28:35.107536] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.569 [2024-05-16 07:28:35.107552] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.570 [2024-05-16 07:28:35.107588] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:41.570 [2024-05-16 07:28:35.107597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.570 passed 00:15:41.570 Test: blob_relations3 ...passed 00:15:41.907 Test: blobstore_clean_power_failure ...passed 00:15:41.907 Test: blob_delete_snapshot_power_failure ...[2024-05-16 07:28:35.242567] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:41.907 [2024-05-16 07:28:35.252408] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:41.907 [2024-05-16 07:28:35.252452] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: 
Failed to open clone 00:15:41.907 [2024-05-16 07:28:35.252462] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.907 [2024-05-16 07:28:35.262142] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:41.907 [2024-05-16 07:28:35.262175] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:41.907 [2024-05-16 07:28:35.262183] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:41.907 [2024-05-16 07:28:35.262190] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.907 [2024-05-16 07:28:35.271936] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:41.907 [2024-05-16 07:28:35.271964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.907 [2024-05-16 07:28:35.281692] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:41.907 [2024-05-16 07:28:35.281717] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.907 [2024-05-16 07:28:35.291478] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:41.907 [2024-05-16 07:28:35.291509] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:41.907 passed 00:15:41.907 Test: blob_create_snapshot_power_failure ...[2024-05-16 07:28:35.320502] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:41.907 [2024-05-16 07:28:35.340337] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:41.907 [2024-05-16 07:28:35.350163] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:41.907 passed 00:15:41.907 Test: blob_io_unit ...passed 00:15:41.907 Test: blob_io_unit_compatibility ...passed 00:15:41.907 Test: blob_ext_md_pages ...passed 00:15:41.907 Test: blob_esnap_io_4096_4096 ...passed 00:15:42.187 Test: blob_esnap_io_512_512 ...passed 00:15:42.187 Test: blob_esnap_io_4096_512 ...passed 00:15:42.187 Test: blob_esnap_io_512_4096 ...passed 00:15:42.187 Test: blob_esnap_clone_resize ...passed 00:15:42.187 Suite: blob_bs_nocopy_noextent 00:15:42.187 Test: blob_open ...passed 00:15:42.187 Test: blob_create ...[2024-05-16 07:28:35.558039] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:42.187 passed 00:15:42.187 Test: blob_create_loop ...passed 00:15:42.187 Test: blob_create_fail ...[2024-05-16 07:28:35.628952] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:42.187 passed 00:15:42.187 Test: blob_create_internal ...passed 00:15:42.187 Test: blob_create_zero_extent ...passed 
00:15:42.187 Test: blob_snapshot ...passed 00:15:42.187 Test: blob_clone ...passed 00:15:42.446 Test: blob_inflate ...[2024-05-16 07:28:35.781113] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:42.446 passed 00:15:42.446 Test: blob_delete ...passed 00:15:42.446 Test: blob_resize_test ...[2024-05-16 07:28:35.839045] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:42.446 passed 00:15:42.446 Test: blob_resize_thin_test ...passed 00:15:42.446 Test: channel_ops ...passed 00:15:42.446 Test: blob_super ...passed 00:15:42.446 Test: blob_rw_verify_iov ...passed 00:15:42.446 Test: blob_unmap ...passed 00:15:42.705 Test: blob_iter ...passed 00:15:42.705 Test: blob_parse_md ...passed 00:15:42.705 Test: bs_load_pending_removal ...passed 00:15:42.705 Test: bs_unload ...[2024-05-16 07:28:36.097666] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:42.705 passed 00:15:42.705 Test: bs_usable_clusters ...passed 00:15:42.705 Test: blob_crc ...[2024-05-16 07:28:36.155748] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:42.705 [2024-05-16 07:28:36.155815] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:42.705 passed 00:15:42.705 Test: blob_flags ...passed 00:15:42.705 Test: bs_version ...passed 00:15:42.705 Test: blob_set_xattrs_test ...[2024-05-16 07:28:36.242977] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:42.705 [2024-05-16 07:28:36.243036] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:42.705 passed 00:15:42.964 Test: blob_thin_prov_alloc ...passed 00:15:42.964 Test: blob_insert_cluster_msg_test ...passed 00:15:42.964 Test: blob_thin_prov_rw ...passed 00:15:42.964 Test: blob_thin_prov_rle ...passed 00:15:42.964 Test: blob_thin_prov_rw_iov ...passed 00:15:42.964 Test: blob_snapshot_rw ...passed 00:15:42.964 Test: blob_snapshot_rw_iov ...passed 00:15:43.222 Test: blob_inflate_rw ...passed 00:15:43.222 Test: blob_snapshot_freeze_io ...passed 00:15:43.222 Test: blob_operation_split_rw ...passed 00:15:43.222 Test: blob_operation_split_rw_iov ...passed 00:15:43.222 Test: blob_simultaneous_operations ...[2024-05-16 07:28:36.695930] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:43.222 [2024-05-16 07:28:36.695991] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:43.222 [2024-05-16 07:28:36.696240] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:43.222 [2024-05-16 07:28:36.696254] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:43.222 [2024-05-16 07:28:36.699363] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:43.222 [2024-05-16 
07:28:36.699387] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:43.222 [2024-05-16 07:28:36.699405] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:43.222 [2024-05-16 07:28:36.699412] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:43.222 passed 00:15:43.222 Test: blob_persist_test ...passed 00:15:43.222 Test: blob_decouple_snapshot ...passed 00:15:43.480 Test: blob_seek_io_unit ...passed 00:15:43.480 Test: blob_nested_freezes ...passed 00:15:43.480 Test: blob_clone_resize ...passed 00:15:43.480 Test: blob_shallow_copy ...[2024-05-16 07:28:36.891068] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:43.480 [2024-05-16 07:28:36.891130] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:43.480 [2024-05-16 07:28:36.891139] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:43.480 passed 00:15:43.480 Suite: blob_blob_nocopy_noextent 00:15:43.480 Test: blob_write ...passed 00:15:43.480 Test: blob_read ...passed 00:15:43.480 Test: blob_rw_verify ...passed 00:15:43.480 Test: blob_rw_verify_iov_nomem ...passed 00:15:43.738 Test: blob_rw_iov_read_only ...passed 00:15:43.738 Test: blob_xattr ...passed 00:15:43.738 Test: blob_dirty_shutdown ...passed 00:15:43.738 Test: blob_is_degraded ...passed 00:15:43.738 Suite: blob_esnap_bs_nocopy_noextent 00:15:43.738 Test: blob_esnap_create ...passed 00:15:43.738 Test: blob_esnap_thread_add_remove ...passed 00:15:43.738 Test: blob_esnap_clone_snapshot ...passed 00:15:43.738 Test: blob_esnap_clone_inflate ...passed 00:15:43.738 Test: blob_esnap_clone_decouple ...passed 00:15:43.997 Test: blob_esnap_clone_reload ...passed 00:15:43.997 Test: blob_esnap_hotplug ...passed 00:15:43.997 Test: blob_set_parent ...[2024-05-16 07:28:37.359332] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:43.997 [2024-05-16 07:28:37.359403] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:43.997 [2024-05-16 07:28:37.359430] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:43.997 [2024-05-16 07:28:37.359450] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:43.997 [2024-05-16 07:28:37.359550] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:43.997 passed 00:15:43.997 Test: blob_set_external_parent ...[2024-05-16 07:28:37.387905] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:43.997 [2024-05-16 07:28:37.387954] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: 
Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:43.997 [2024-05-16 07:28:37.387962] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:43.997 [2024-05-16 07:28:37.387996] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:43.997 passed 00:15:43.997 Suite: blob_nocopy_extent 00:15:43.997 Test: blob_init ...[2024-05-16 07:28:37.397479] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:43.997 passed 00:15:43.997 Test: blob_thin_provision ...passed 00:15:43.997 Test: blob_read_only ...passed 00:15:43.997 Test: bs_load ...[2024-05-16 07:28:37.435096] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:43.997 passed 00:15:43.997 Test: bs_load_custom_cluster_size ...passed 00:15:43.997 Test: bs_load_after_failed_grow ...passed 00:15:43.997 Test: bs_cluster_sz ...[2024-05-16 07:28:37.454072] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:43.997 [2024-05-16 07:28:37.454134] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:15:43.997 [2024-05-16 07:28:37.454144] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:43.997 passed 00:15:43.997 Test: bs_resize_md ...passed 00:15:43.997 Test: bs_destroy ...passed 00:15:43.997 Test: bs_type ...passed 00:15:43.997 Test: bs_super_block ...passed 00:15:43.997 Test: bs_test_recover_cluster_count ...passed 00:15:43.997 Test: bs_grow_live ...passed 00:15:43.997 Test: bs_grow_live_no_space ...passed 00:15:43.997 Test: bs_test_grow ...passed 00:15:43.997 Test: blob_serialize_test ...passed 00:15:43.997 Test: super_block_crc ...passed 00:15:43.997 Test: blob_thin_prov_write_count_io ...passed 00:15:44.256 Test: blob_thin_prov_unmap_cluster ...passed 00:15:44.256 Test: bs_load_iter_test ...passed 00:15:44.256 Test: blob_relations ...[2024-05-16 07:28:37.596126] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.596188] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.596262] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.596270] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 passed 00:15:44.256 Test: blob_relations2 ...[2024-05-16 07:28:37.606409] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.606435] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.606442] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.606448] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.606549] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.606558] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.606591] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:44.256 [2024-05-16 07:28:37.606597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 passed 00:15:44.256 Test: blob_relations3 ...passed 00:15:44.256 Test: blobstore_clean_power_failure ...passed 00:15:44.256 Test: blob_delete_snapshot_power_failure ...[2024-05-16 07:28:37.739487] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:44.256 [2024-05-16 07:28:37.749058] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:44.256 [2024-05-16 07:28:37.758581] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:44.256 [2024-05-16 07:28:37.758625] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:44.256 [2024-05-16 07:28:37.758633] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.768150] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:44.256 [2024-05-16 07:28:37.768174] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:44.256 [2024-05-16 07:28:37.768181] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:44.256 [2024-05-16 07:28:37.768188] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.777779] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:44.256 [2024-05-16 07:28:37.777800] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:44.256 [2024-05-16 07:28:37.777807] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:44.256 [2024-05-16 07:28:37.777814] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.787352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:44.256 [2024-05-16 07:28:37.787371] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.796973] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:44.256 [2024-05-16 07:28:37.796996] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.256 [2024-05-16 07:28:37.806549] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:44.256 [2024-05-16 07:28:37.806575] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:44.514 passed 00:15:44.514 Test: blob_create_snapshot_power_failure ...[2024-05-16 07:28:37.835616] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:44.514 [2024-05-16 07:28:37.845369] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:44.514 [2024-05-16 07:28:37.864517] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:44.514 [2024-05-16 07:28:37.874082] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:44.514 passed 00:15:44.514 Test: blob_io_unit ...passed 00:15:44.514 Test: blob_io_unit_compatibility ...passed 00:15:44.514 Test: blob_ext_md_pages ...passed 00:15:44.514 Test: blob_esnap_io_4096_4096 ...passed 00:15:44.514 Test: blob_esnap_io_512_512 ...passed 00:15:44.514 Test: blob_esnap_io_4096_512 ...passed 00:15:44.514 Test: blob_esnap_io_512_4096 ...passed 00:15:44.514 Test: blob_esnap_clone_resize ...passed 00:15:44.514 Suite: blob_bs_nocopy_extent 00:15:44.514 Test: blob_open ...passed 00:15:44.514 Test: blob_create ...[2024-05-16 07:28:38.074012] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:44.772 passed 00:15:44.772 Test: blob_create_loop ...passed 00:15:44.772 Test: blob_create_fail ...[2024-05-16 07:28:38.143144] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:44.772 passed 00:15:44.772 Test: blob_create_internal ...passed 00:15:44.772 Test: blob_create_zero_extent ...passed 00:15:44.772 Test: blob_snapshot ...passed 00:15:44.772 Test: blob_clone ...passed 00:15:44.772 Test: blob_inflate ...[2024-05-16 07:28:38.290879] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:15:44.772 passed 00:15:44.772 Test: blob_delete ...passed 00:15:45.030 Test: blob_resize_test ...[2024-05-16 07:28:38.346957] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:45.030 passed 00:15:45.030 Test: blob_resize_thin_test ...passed 00:15:45.030 Test: channel_ops ...passed 00:15:45.030 Test: blob_super ...passed 00:15:45.030 Test: blob_rw_verify_iov ...passed 00:15:45.030 Test: blob_unmap ...passed 00:15:45.030 Test: blob_iter ...passed 00:15:45.030 Test: blob_parse_md ...passed 00:15:45.030 Test: bs_load_pending_removal ...passed 00:15:45.288 Test: bs_unload ...[2024-05-16 07:28:38.603653] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:45.288 passed 00:15:45.288 Test: bs_usable_clusters ...passed 00:15:45.288 Test: blob_crc ...[2024-05-16 07:28:38.660372] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:45.288 [2024-05-16 07:28:38.660437] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:45.288 passed 00:15:45.288 Test: blob_flags ...passed 00:15:45.288 Test: bs_version ...passed 00:15:45.288 Test: blob_set_xattrs_test ...[2024-05-16 07:28:38.744745] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:45.288 [2024-05-16 07:28:38.744794] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:45.288 passed 00:15:45.288 Test: blob_thin_prov_alloc ...passed 00:15:45.288 Test: blob_insert_cluster_msg_test ...passed 00:15:45.288 Test: blob_thin_prov_rw ...passed 00:15:45.547 Test: blob_thin_prov_rle ...passed 00:15:45.547 Test: blob_thin_prov_rw_iov ...passed 00:15:45.547 Test: blob_snapshot_rw ...passed 00:15:45.547 Test: blob_snapshot_rw_iov ...passed 00:15:45.547 Test: blob_inflate_rw ...passed 00:15:45.547 Test: blob_snapshot_freeze_io ...passed 00:15:45.547 Test: blob_operation_split_rw ...passed 00:15:45.807 Test: blob_operation_split_rw_iov ...passed 00:15:45.807 Test: blob_simultaneous_operations ...[2024-05-16 07:28:39.183880] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:45.807 [2024-05-16 07:28:39.183945] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:45.807 [2024-05-16 07:28:39.184194] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:45.807 [2024-05-16 07:28:39.184207] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:45.807 [2024-05-16 07:28:39.187303] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:45.807 [2024-05-16 07:28:39.187333] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:45.807 [2024-05-16 07:28:39.187349] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is 
open 00:15:45.807 [2024-05-16 07:28:39.187380] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:45.807 passed 00:15:45.807 Test: blob_persist_test ...passed 00:15:45.807 Test: blob_decouple_snapshot ...passed 00:15:45.807 Test: blob_seek_io_unit ...passed 00:15:45.807 Test: blob_nested_freezes ...passed 00:15:45.807 Test: blob_clone_resize ...passed 00:15:46.066 Test: blob_shallow_copy ...[2024-05-16 07:28:39.377394] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:46.066 [2024-05-16 07:28:39.377469] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:46.066 [2024-05-16 07:28:39.377478] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:46.066 passed 00:15:46.066 Suite: blob_blob_nocopy_extent 00:15:46.066 Test: blob_write ...passed 00:15:46.066 Test: blob_read ...passed 00:15:46.066 Test: blob_rw_verify ...passed 00:15:46.066 Test: blob_rw_verify_iov_nomem ...passed 00:15:46.066 Test: blob_rw_iov_read_only ...passed 00:15:46.066 Test: blob_xattr ...passed 00:15:46.066 Test: blob_dirty_shutdown ...passed 00:15:46.066 Test: blob_is_degraded ...passed 00:15:46.066 Suite: blob_esnap_bs_nocopy_extent 00:15:46.325 Test: blob_esnap_create ...passed 00:15:46.325 Test: blob_esnap_thread_add_remove ...passed 00:15:46.325 Test: blob_esnap_clone_snapshot ...passed 00:15:46.325 Test: blob_esnap_clone_inflate ...passed 00:15:46.325 Test: blob_esnap_clone_decouple ...passed 00:15:46.325 Test: blob_esnap_clone_reload ...passed 00:15:46.325 Test: blob_esnap_hotplug ...passed 00:15:46.325 Test: blob_set_parent ...[2024-05-16 07:28:39.836397] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:46.325 [2024-05-16 07:28:39.836464] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:46.325 [2024-05-16 07:28:39.836481] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:46.325 [2024-05-16 07:28:39.836490] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:46.325 [2024-05-16 07:28:39.836536] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:46.325 passed 00:15:46.325 Test: blob_set_external_parent ...[2024-05-16 07:28:39.864904] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:46.325 [2024-05-16 07:28:39.864937] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:46.325 [2024-05-16 07:28:39.864944] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:46.325 [2024-05-16 
07:28:39.864997] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:46.325 passed 00:15:46.325 Suite: blob_copy_noextent 00:15:46.325 Test: blob_init ...[2024-05-16 07:28:39.874408] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:46.325 passed 00:15:46.326 Test: blob_thin_provision ...passed 00:15:46.584 Test: blob_read_only ...passed 00:15:46.584 Test: bs_load ...[2024-05-16 07:28:39.912346] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:46.584 passed 00:15:46.584 Test: bs_load_custom_cluster_size ...passed 00:15:46.584 Test: bs_load_after_failed_grow ...passed 00:15:46.584 Test: bs_cluster_sz ...[2024-05-16 07:28:39.931558] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:46.584 [2024-05-16 07:28:39.931614] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:15:46.584 [2024-05-16 07:28:39.931626] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:46.584 passed 00:15:46.584 Test: bs_resize_md ...passed 00:15:46.584 Test: bs_destroy ...passed 00:15:46.584 Test: bs_type ...passed 00:15:46.584 Test: bs_super_block ...passed 00:15:46.584 Test: bs_test_recover_cluster_count ...passed 00:15:46.584 Test: bs_grow_live ...passed 00:15:46.584 Test: bs_grow_live_no_space ...passed 00:15:46.584 Test: bs_test_grow ...passed 00:15:46.584 Test: blob_serialize_test ...passed 00:15:46.584 Test: super_block_crc ...passed 00:15:46.584 Test: blob_thin_prov_write_count_io ...passed 00:15:46.584 Test: blob_thin_prov_unmap_cluster ...passed 00:15:46.584 Test: bs_load_iter_test ...passed 00:15:46.584 Test: blob_relations ...[2024-05-16 07:28:40.072882] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.072938] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 [2024-05-16 07:28:40.073003] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.073011] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 passed 00:15:46.584 Test: blob_relations2 ...[2024-05-16 07:28:40.083001] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.083039] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 [2024-05-16 07:28:40.083047] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.083053] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 [2024-05-16 07:28:40.083138] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.083146] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 [2024-05-16 07:28:40.083177] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:46.584 [2024-05-16 07:28:40.083184] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.584 passed 00:15:46.584 Test: blob_relations3 ...passed 00:15:46.864 Test: blobstore_clean_power_failure ...passed 00:15:46.864 Test: blob_delete_snapshot_power_failure ...[2024-05-16 07:28:40.215539] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:46.864 [2024-05-16 07:28:40.225132] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:46.864 [2024-05-16 07:28:40.225169] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:46.864 [2024-05-16 07:28:40.225178] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.864 [2024-05-16 07:28:40.235558] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:46.864 [2024-05-16 07:28:40.235581] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:46.864 [2024-05-16 07:28:40.235588] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:46.864 [2024-05-16 07:28:40.235595] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.864 [2024-05-16 07:28:40.245075] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:46.864 [2024-05-16 07:28:40.245095] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.864 [2024-05-16 07:28:40.254525] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:46.864 [2024-05-16 07:28:40.254564] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.864 [2024-05-16 07:28:40.264175] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:46.864 [2024-05-16 07:28:40.264196] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:46.864 passed 00:15:46.864 Test: blob_create_snapshot_power_failure ...[2024-05-16 07:28:40.292820] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:46.864 [2024-05-16 07:28:40.311784] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:15:46.864 
[2024-05-16 07:28:40.321397] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:46.864 passed 00:15:46.864 Test: blob_io_unit ...passed 00:15:46.864 Test: blob_io_unit_compatibility ...passed 00:15:46.864 Test: blob_ext_md_pages ...passed 00:15:46.864 Test: blob_esnap_io_4096_4096 ...passed 00:15:46.864 Test: blob_esnap_io_512_512 ...passed 00:15:47.141 Test: blob_esnap_io_4096_512 ...passed 00:15:47.141 Test: blob_esnap_io_512_4096 ...passed 00:15:47.141 Test: blob_esnap_clone_resize ...passed 00:15:47.141 Suite: blob_bs_copy_noextent 00:15:47.141 Test: blob_open ...passed 00:15:47.141 Test: blob_create ...[2024-05-16 07:28:40.522775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:47.141 passed 00:15:47.141 Test: blob_create_loop ...passed 00:15:47.141 Test: blob_create_fail ...[2024-05-16 07:28:40.591132] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:47.141 passed 00:15:47.141 Test: blob_create_internal ...passed 00:15:47.141 Test: blob_create_zero_extent ...passed 00:15:47.141 Test: blob_snapshot ...passed 00:15:47.399 Test: blob_clone ...passed 00:15:47.399 Test: blob_inflate ...[2024-05-16 07:28:40.736661] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:47.399 passed 00:15:47.399 Test: blob_delete ...passed 00:15:47.399 Test: blob_resize_test ...[2024-05-16 07:28:40.792933] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:47.399 passed 00:15:47.399 Test: blob_resize_thin_test ...passed 00:15:47.399 Test: channel_ops ...passed 00:15:47.399 Test: blob_super ...passed 00:15:47.399 Test: blob_rw_verify_iov ...passed 00:15:47.399 Test: blob_unmap ...passed 00:15:47.657 Test: blob_iter ...passed 00:15:47.657 Test: blob_parse_md ...passed 00:15:47.657 Test: bs_load_pending_removal ...passed 00:15:47.657 Test: bs_unload ...[2024-05-16 07:28:41.048249] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:47.657 passed 00:15:47.658 Test: bs_usable_clusters ...passed 00:15:47.658 Test: blob_crc ...[2024-05-16 07:28:41.104949] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:47.658 [2024-05-16 07:28:41.105001] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:47.658 passed 00:15:47.658 Test: blob_flags ...passed 00:15:47.658 Test: bs_version ...passed 00:15:47.658 Test: blob_set_xattrs_test ...[2024-05-16 07:28:41.189971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:47.658 [2024-05-16 07:28:41.190024] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:47.658 passed 00:15:47.916 Test: blob_thin_prov_alloc ...passed 00:15:47.916 Test: blob_insert_cluster_msg_test ...passed 00:15:47.916 Test: blob_thin_prov_rw ...passed 
00:15:47.916 Test: blob_thin_prov_rle ...passed 00:15:47.916 Test: blob_thin_prov_rw_iov ...passed 00:15:47.916 Test: blob_snapshot_rw ...passed 00:15:47.916 Test: blob_snapshot_rw_iov ...passed 00:15:47.916 Test: blob_inflate_rw ...passed 00:15:48.174 Test: blob_snapshot_freeze_io ...passed 00:15:48.174 Test: blob_operation_split_rw ...passed 00:15:48.174 Test: blob_operation_split_rw_iov ...passed 00:15:48.174 Test: blob_simultaneous_operations ...[2024-05-16 07:28:41.625721] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:48.175 [2024-05-16 07:28:41.625796] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.175 [2024-05-16 07:28:41.626035] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:48.175 [2024-05-16 07:28:41.626047] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.175 [2024-05-16 07:28:41.628026] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:48.175 [2024-05-16 07:28:41.628051] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.175 [2024-05-16 07:28:41.628066] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:48.175 [2024-05-16 07:28:41.628073] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.175 passed 00:15:48.175 Test: blob_persist_test ...passed 00:15:48.175 Test: blob_decouple_snapshot ...passed 00:15:48.175 Test: blob_seek_io_unit ...passed 00:15:48.432 Test: blob_nested_freezes ...passed 00:15:48.432 Test: blob_clone_resize ...passed 00:15:48.432 Test: blob_shallow_copy ...[2024-05-16 07:28:41.814775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:48.432 [2024-05-16 07:28:41.814832] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:48.432 [2024-05-16 07:28:41.814841] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:48.432 passed 00:15:48.432 Suite: blob_blob_copy_noextent 00:15:48.432 Test: blob_write ...passed 00:15:48.432 Test: blob_read ...passed 00:15:48.432 Test: blob_rw_verify ...passed 00:15:48.432 Test: blob_rw_verify_iov_nomem ...passed 00:15:48.432 Test: blob_rw_iov_read_only ...passed 00:15:48.691 Test: blob_xattr ...passed 00:15:48.691 Test: blob_dirty_shutdown ...passed 00:15:48.691 Test: blob_is_degraded ...passed 00:15:48.691 Suite: blob_esnap_bs_copy_noextent 00:15:48.691 Test: blob_esnap_create ...passed 00:15:48.691 Test: blob_esnap_thread_add_remove ...passed 00:15:48.691 Test: blob_esnap_clone_snapshot ...passed 00:15:48.691 Test: blob_esnap_clone_inflate ...passed 00:15:48.691 Test: blob_esnap_clone_decouple ...passed 00:15:48.691 Test: blob_esnap_clone_reload ...passed 00:15:48.691 Test: blob_esnap_hotplug ...passed 
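The Suite:/Test:/Run Summary lines in this output come from the CUnit framework named in the banners below. For orientation, a minimal registration sketch is shown here; the suite and test names are placeholders, not the actual SPDK blob_ut sources, and the real SPDK unit tests additionally set up a thread/reactor harness before running.

/* Minimal CUnit skeleton (illustrative only; not the real blob_ut.c). */
#include <CUnit/Basic.h>

static void example_test(void)
{
	/* CUnit assertions decide pass/fail and feed the Run Summary counts. */
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* "blob_example" is a hypothetical suite name used only for this sketch. */
	suite = CU_add_suite("blob_example", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);	/* prints a per-test line, as seen in this log */
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return num_failures;
}

CU_BRM_VERBOSE is what makes each individual test emit a "Test: ...passed" line instead of only the final Run Summary block.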
00:15:48.948 Test: blob_set_parent ...[2024-05-16 07:28:42.275536] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:48.948 [2024-05-16 07:28:42.275591] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:15:48.948 [2024-05-16 07:28:42.275610] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:48.948 [2024-05-16 07:28:42.275619] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:48.948 [2024-05-16 07:28:42.275663] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:48.948 passed 00:15:48.948 Test: blob_set_external_parent ...[2024-05-16 07:28:42.303973] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:48.948 [2024-05-16 07:28:42.304016] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:48.948 [2024-05-16 07:28:42.304024] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:48.948 [2024-05-16 07:28:42.304062] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:48.948 passed 00:15:48.948 Suite: blob_copy_extent 00:15:48.948 Test: blob_init ...[2024-05-16 07:28:42.313482] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:15:48.948 passed 00:15:48.948 Test: blob_thin_provision ...passed 00:15:48.948 Test: blob_read_only ...passed 00:15:48.948 Test: bs_load ...[2024-05-16 07:28:42.351358] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:15:48.948 passed 00:15:48.948 Test: bs_load_custom_cluster_size ...passed 00:15:48.948 Test: bs_load_after_failed_grow ...passed 00:15:48.948 Test: bs_cluster_sz ...[2024-05-16 07:28:42.370556] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:15:48.949 [2024-05-16 07:28:42.370624] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:15:48.949 [2024-05-16 07:28:42.370636] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:15:48.949 passed 00:15:48.949 Test: bs_resize_md ...passed 00:15:48.949 Test: bs_destroy ...passed 00:15:48.949 Test: bs_type ...passed 00:15:48.949 Test: bs_super_block ...passed 00:15:48.949 Test: bs_test_recover_cluster_count ...passed 00:15:48.949 Test: bs_grow_live ...passed 00:15:48.949 Test: bs_grow_live_no_space ...passed 00:15:48.949 Test: bs_test_grow ...passed 00:15:48.949 Test: blob_serialize_test ...passed 00:15:48.949 Test: super_block_crc ...passed 00:15:48.949 Test: blob_thin_prov_write_count_io ...passed 00:15:48.949 Test: blob_thin_prov_unmap_cluster ...passed 00:15:48.949 Test: bs_load_iter_test ...passed 00:15:48.949 Test: blob_relations ...[2024-05-16 07:28:42.507367] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:48.949 [2024-05-16 07:28:42.507418] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.949 [2024-05-16 07:28:42.507494] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:48.949 [2024-05-16 07:28:42.507502] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:48.949 passed 00:15:49.208 Test: blob_relations2 ...[2024-05-16 07:28:42.517480] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:49.208 [2024-05-16 07:28:42.517502] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.517510] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:49.208 [2024-05-16 07:28:42.517516] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.517614] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:49.208 [2024-05-16 07:28:42.517622] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.517654] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:15:49.208 [2024-05-16 07:28:42.517661] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 passed 00:15:49.208 Test: blob_relations3 ...passed 00:15:49.208 Test: blobstore_clean_power_failure ...passed 00:15:49.208 Test: blob_delete_snapshot_power_failure ...[2024-05-16 07:28:42.650318] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:49.208 [2024-05-16 07:28:42.659874] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:49.208 [2024-05-16 07:28:42.669432] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for 
blobid 0x100000000: -5 00:15:49.208 [2024-05-16 07:28:42.669475] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:49.208 [2024-05-16 07:28:42.669484] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.678952] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:49.208 [2024-05-16 07:28:42.678975] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:49.208 [2024-05-16 07:28:42.678982] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:49.208 [2024-05-16 07:28:42.678990] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.688474] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:49.208 [2024-05-16 07:28:42.688496] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:15:49.208 [2024-05-16 07:28:42.688502] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:15:49.208 [2024-05-16 07:28:42.688509] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.697963] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:15:49.208 [2024-05-16 07:28:42.697997] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.707480] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:15:49.208 [2024-05-16 07:28:42.707515] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 [2024-05-16 07:28:42.717005] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:15:49.208 [2024-05-16 07:28:42.717026] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:49.208 passed 00:15:49.208 Test: blob_create_snapshot_power_failure ...[2024-05-16 07:28:42.745728] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:15:49.208 [2024-05-16 07:28:42.755268] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:15:49.208 [2024-05-16 07:28:42.774240] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:15:49.467 [2024-05-16 07:28:42.783763] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:15:49.467 passed 00:15:49.467 Test: blob_io_unit ...passed 00:15:49.467 Test: blob_io_unit_compatibility ...passed 00:15:49.467 Test: blob_ext_md_pages ...passed 00:15:49.467 Test: 
blob_esnap_io_4096_4096 ...passed 00:15:49.467 Test: blob_esnap_io_512_512 ...passed 00:15:49.467 Test: blob_esnap_io_4096_512 ...passed 00:15:49.467 Test: blob_esnap_io_512_4096 ...passed 00:15:49.467 Test: blob_esnap_clone_resize ...passed 00:15:49.467 Suite: blob_bs_copy_extent 00:15:49.467 Test: blob_open ...passed 00:15:49.467 Test: blob_create ...[2024-05-16 07:28:42.984666] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:15:49.467 passed 00:15:49.467 Test: blob_create_loop ...passed 00:15:49.726 Test: blob_create_fail ...[2024-05-16 07:28:43.053537] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:49.726 passed 00:15:49.726 Test: blob_create_internal ...passed 00:15:49.726 Test: blob_create_zero_extent ...passed 00:15:49.726 Test: blob_snapshot ...passed 00:15:49.726 Test: blob_clone ...passed 00:15:49.726 Test: blob_inflate ...[2024-05-16 07:28:43.199854] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:15:49.726 passed 00:15:49.726 Test: blob_delete ...passed 00:15:49.726 Test: blob_resize_test ...[2024-05-16 07:28:43.256130] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:15:49.726 passed 00:15:49.986 Test: blob_resize_thin_test ...passed 00:15:49.986 Test: channel_ops ...passed 00:15:49.986 Test: blob_super ...passed 00:15:49.986 Test: blob_rw_verify_iov ...passed 00:15:49.986 Test: blob_unmap ...passed 00:15:49.986 Test: blob_iter ...passed 00:15:49.986 Test: blob_parse_md ...passed 00:15:49.986 Test: bs_load_pending_removal ...passed 00:15:49.986 Test: bs_unload ...[2024-05-16 07:28:43.509508] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:15:49.986 passed 00:15:49.986 Test: bs_usable_clusters ...passed 00:15:50.246 Test: blob_crc ...[2024-05-16 07:28:43.565947] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:50.246 [2024-05-16 07:28:43.565996] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:15:50.246 passed 00:15:50.246 Test: blob_flags ...passed 00:15:50.246 Test: bs_version ...passed 00:15:50.246 Test: blob_set_xattrs_test ...[2024-05-16 07:28:43.650797] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:50.246 [2024-05-16 07:28:43.650853] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:15:50.246 passed 00:15:50.246 Test: blob_thin_prov_alloc ...passed 00:15:50.246 Test: blob_insert_cluster_msg_test ...passed 00:15:50.246 Test: blob_thin_prov_rw ...passed 00:15:50.246 Test: blob_thin_prov_rle ...passed 00:15:50.246 Test: blob_thin_prov_rw_iov ...passed 00:15:50.503 Test: blob_snapshot_rw ...passed 00:15:50.503 Test: blob_snapshot_rw_iov ...passed 00:15:50.503 Test: blob_inflate_rw ...passed 00:15:50.503 Test: blob_snapshot_freeze_io ...passed 00:15:50.503 Test: blob_operation_split_rw ...passed 
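The *ERROR* lines interleaved with the passed tests above are printed by the blobstore library itself while the tests deliberately drive failure paths; each test then asserts on the error code delivered through its completion callback. A self-contained sketch of that callback-capture pattern follows; fake_async_unload(), op_complete(), and g_bserrno are stand-ins invented for the example rather than SPDK APIs, and -EBUSY is only an assumed error code.

/* Self-contained sketch of the callback-capture pattern used by these tests.
 * fake_async_unload() stands in for a real asynchronous blobstore call such as
 * spdk_bs_unload(); it is defined here only so the example compiles. */
#include <errno.h>
#include <CUnit/Basic.h>

typedef void (*op_complete_fn)(void *cb_arg, int bserrno);

static int g_bserrno;

static void op_complete(void *cb_arg, int bserrno)
{
	(void)cb_arg;
	g_bserrno = bserrno;	/* the library reports its result only through this callback */
}

/* Stand-in for an async API that fails because a blob is still open. */
static void fake_async_unload(op_complete_fn cb_fn, void *cb_arg)
{
	cb_fn(cb_arg, -EBUSY);
}

static void unload_with_open_blob_test(void)
{
	g_bserrno = 0;
	fake_async_unload(op_complete, NULL);
	/* In the real tests the reactor is polled here until the callback has run. */
	CU_ASSERT(g_bserrno == -EBUSY);
}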
00:15:50.503 Test: blob_operation_split_rw_iov ...passed 00:15:50.763 Test: blob_simultaneous_operations ...[2024-05-16 07:28:44.087543] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:50.763 [2024-05-16 07:28:44.087619] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:50.763 [2024-05-16 07:28:44.087858] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:50.763 [2024-05-16 07:28:44.087872] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:50.763 [2024-05-16 07:28:44.089831] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:50.763 [2024-05-16 07:28:44.089853] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:50.763 [2024-05-16 07:28:44.089870] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:15:50.763 [2024-05-16 07:28:44.089876] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:15:50.763 passed 00:15:50.763 Test: blob_persist_test ...passed 00:15:50.763 Test: blob_decouple_snapshot ...passed 00:15:50.763 Test: blob_seek_io_unit ...passed 00:15:50.763 Test: blob_nested_freezes ...passed 00:15:50.763 Test: blob_clone_resize ...passed 00:15:50.763 Test: blob_shallow_copy ...[2024-05-16 07:28:44.277362] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:15:50.763 [2024-05-16 07:28:44.277419] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:15:50.763 [2024-05-16 07:28:44.277429] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:15:50.763 passed 00:15:50.763 Suite: blob_blob_copy_extent 00:15:50.763 Test: blob_write ...passed 00:15:51.021 Test: blob_read ...passed 00:15:51.021 Test: blob_rw_verify ...passed 00:15:51.021 Test: blob_rw_verify_iov_nomem ...passed 00:15:51.021 Test: blob_rw_iov_read_only ...passed 00:15:51.021 Test: blob_xattr ...passed 00:15:51.021 Test: blob_dirty_shutdown ...passed 00:15:51.021 Test: blob_is_degraded ...passed 00:15:51.021 Suite: blob_esnap_bs_copy_extent 00:15:51.021 Test: blob_esnap_create ...passed 00:15:51.021 Test: blob_esnap_thread_add_remove ...passed 00:15:51.279 Test: blob_esnap_clone_snapshot ...passed 00:15:51.279 Test: blob_esnap_clone_inflate ...passed 00:15:51.279 Test: blob_esnap_clone_decouple ...passed 00:15:51.279 Test: blob_esnap_clone_reload ...passed 00:15:51.279 Test: blob_esnap_hotplug ...passed 00:15:51.279 Test: blob_set_parent ...[2024-05-16 07:28:44.737605] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:15:51.279 [2024-05-16 07:28:44.737663] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the 
same 00:15:51.279 [2024-05-16 07:28:44.737681] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:15:51.279 [2024-05-16 07:28:44.737690] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:15:51.279 [2024-05-16 07:28:44.737890] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:51.279 passed 00:15:51.279 Test: blob_set_external_parent ...[2024-05-16 07:28:44.766071] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:15:51.280 [2024-05-16 07:28:44.766120] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:15:51.280 [2024-05-16 07:28:44.766128] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:15:51.280 [2024-05-16 07:28:44.766178] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:15:51.280 passed 00:15:51.280 00:15:51.280 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.280 suites 16 16 n/a 0 0 00:15:51.280 tests 376 376 376 0 0 00:15:51.280 asserts 143965 143965 143965 0 n/a 00:15:51.280 00:15:51.280 Elapsed time = 9.914 seconds 00:15:51.280 07:28:44 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:15:51.280 00:15:51.280 00:15:51.280 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.280 http://cunit.sourceforge.net/ 00:15:51.280 00:15:51.280 00:15:51.280 Suite: blob_bdev 00:15:51.280 Test: create_bs_dev ...passed 00:15:51.280 Test: create_bs_dev_ro ...passed 00:15:51.280 Test: create_bs_dev_rw ...passed 00:15:51.280 Test: claim_bs_dev ...[2024-05-16 07:28:44.785472] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:15:51.280 [2024-05-16 07:28:44.785659] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:15:51.280 passed 00:15:51.280 Test: claim_bs_dev_ro ...passed 00:15:51.280 Test: deferred_destroy_refs ...passed 00:15:51.280 Test: deferred_destroy_channels ...passed 00:15:51.280 Test: deferred_destroy_threads ...passed 00:15:51.280 00:15:51.280 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.280 suites 1 1 n/a 0 0 00:15:51.280 tests 8 8 8 0 0 00:15:51.280 asserts 119 119 119 0 n/a 00:15:51.280 00:15:51.280 Elapsed time = 0.000 seconds 00:15:51.280 07:28:44 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:15:51.280 00:15:51.280 00:15:51.280 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.280 http://cunit.sourceforge.net/ 00:15:51.280 00:15:51.280 00:15:51.280 Suite: tree 00:15:51.280 Test: blobfs_tree_op_test ...passed 00:15:51.280 00:15:51.280 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.280 suites 1 1 n/a 0 0 00:15:51.280 tests 1 1 1 0 0 00:15:51.280 asserts 27 27 27 0 n/a 00:15:51.280 00:15:51.280 
Elapsed time = 0.000 seconds 00:15:51.280 07:28:44 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:15:51.280 00:15:51.280 00:15:51.280 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.280 http://cunit.sourceforge.net/ 00:15:51.280 00:15:51.280 00:15:51.280 Suite: blobfs_async_ut 00:15:51.280 Test: fs_init ...passed 00:15:51.538 Test: fs_open ...passed 00:15:51.538 Test: fs_create ...passed 00:15:51.538 Test: fs_truncate ...passed 00:15:51.538 Test: fs_rename ...[2024-05-16 07:28:44.883195] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:15:51.538 passed 00:15:51.538 Test: fs_rw_async ...passed 00:15:51.538 Test: fs_writev_readv_async ...passed 00:15:51.538 Test: tree_find_buffer_ut ...passed 00:15:51.538 Test: channel_ops ...passed 00:15:51.538 Test: channel_ops_sync ...passed 00:15:51.538 00:15:51.538 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.538 suites 1 1 n/a 0 0 00:15:51.538 tests 10 10 10 0 0 00:15:51.538 asserts 292 292 292 0 n/a 00:15:51.538 00:15:51.538 Elapsed time = 0.117 seconds 00:15:51.538 07:28:44 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:15:51.538 00:15:51.538 00:15:51.538 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.538 http://cunit.sourceforge.net/ 00:15:51.538 00:15:51.538 00:15:51.538 Suite: blobfs_sync_ut 00:15:51.538 Test: cache_read_after_write ...[2024-05-16 07:28:44.981591] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:15:51.538 passed 00:15:51.538 Test: file_length ...passed 00:15:51.538 Test: append_write_to_extend_blob ...passed 00:15:51.538 Test: partial_buffer ...passed 00:15:51.538 Test: cache_write_null_buffer ...passed 00:15:51.538 Test: fs_create_sync ...passed 00:15:51.538 Test: fs_rename_sync ...passed 00:15:51.538 Test: cache_append_no_cache ...passed 00:15:51.538 Test: fs_delete_file_without_close ...passed 00:15:51.538 00:15:51.538 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.538 suites 1 1 n/a 0 0 00:15:51.538 tests 9 9 9 0 0 00:15:51.538 asserts 345 345 345 0 n/a 00:15:51.538 00:15:51.538 Elapsed time = 0.250 seconds 00:15:51.538 07:28:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:15:51.538 00:15:51.538 00:15:51.538 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.538 http://cunit.sourceforge.net/ 00:15:51.538 00:15:51.538 00:15:51.538 Suite: blobfs_bdev_ut 00:15:51.538 Test: spdk_blobfs_bdev_detect_test ...passed 00:15:51.538 Test: spdk_blobfs_bdev_create_test ...passed 00:15:51.538 Test: spdk_blobfs_bdev_mount_test ...passed 00:15:51.538 00:15:51.538 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.538 suites 1 1 n/a 0 0 00:15:51.538 tests 3 3 3 0 0 00:15:51.538 asserts 9 9 9 0 n/a 00:15:51.538 00:15:51.538 Elapsed time = 0.000 seconds 00:15:51.538 [2024-05-16 07:28:45.076296] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:15:51.538 [2024-05-16 07:28:45.076490] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to 
unload blobfs on bdev ut_bdev: errno -1 00:15:51.538 00:15:51.538 real 0m10.222s 00:15:51.538 user 0m10.233s 00:15:51.538 sys 0m0.121s 00:15:51.538 07:28:45 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.538 ************************************ 00:15:51.538 END TEST unittest_blob_blobfs 00:15:51.538 ************************************ 00:15:51.538 07:28:45 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:15:51.799 07:28:45 unittest -- unit/unittest.sh@233 -- # run_test unittest_event unittest_event 00:15:51.799 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:51.799 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.799 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:51.799 ************************************ 00:15:51.799 START TEST unittest_event 00:15:51.799 ************************************ 00:15:51.799 07:28:45 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:15:51.799 07:28:45 unittest.unittest_event -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:15:51.799 00:15:51.799 00:15:51.799 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.799 http://cunit.sourceforge.net/ 00:15:51.799 00:15:51.799 00:15:51.799 Suite: app_suite 00:15:51.799 Test: test_spdk_app_parse_args ...app_ut [options] 00:15:51.799 00:15:51.799 CPU options: 00:15:51.799 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:51.799 (like [0,1,10]) 00:15:51.799 --lcores lcore to CPU mapping list. The list is in the format: 00:15:51.799 [<,lcores[@CPUs]>...] 00:15:51.799 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:51.799 Within the group, '-' is used for range separator, 00:15:51.799 ',' is used for single number separator. 00:15:51.799 '( )' can be omitted for single element group, 00:15:51.799 '@' can be omitted if cpus and lcores have the same value 00:15:51.799 --disable-cpumask-locks Disable CPU core lock files. 00:15:51.799 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:51.799 pollers in the app support interrupt mode) 00:15:51.799 -p, --main-core main (primary) core for DPDK 00:15:51.799 00:15:51.799 Configuration options: 00:15:51.799 -c, --config, --json JSON config file 00:15:51.799 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:51.799 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:15:51.799 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:51.799 --rpcs-allowed comma-separated list of permitted RPCS 00:15:51.799 app_ut: invalid option -- z 00:15:51.799 --json-ignore-init-errors don't exit on invalid config entry 00:15:51.799 00:15:51.799 Memory options: 00:15:51.799 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:51.799 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:51.799 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:51.799 -R, --huge-unlink unlink huge files after initialization 00:15:51.799 -n, --mem-channels number of memory channels used for DPDK 00:15:51.799 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:51.799 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:51.799 --no-huge run without using hugepages 00:15:51.799 -i, --shm-id shared memory ID (optional) 00:15:51.799 -g, --single-file-segments force creating just one hugetlbfs file 00:15:51.799 00:15:51.799 PCI options: 00:15:51.799 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:51.799 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:51.799 -u, --no-pci disable PCI access 00:15:51.799 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:51.799 00:15:51.799 Log options: 00:15:51.799 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:51.799 --silence-noticelog disable notice level logging to stderr 00:15:51.799 00:15:51.799 Trace options: 00:15:51.799 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:51.799 setting 0 to disable trace (default 32768) 00:15:51.799 Tracepoints vary in size and can use more than one trace entry. 00:15:51.799 -e, --tpoint-group [:] 00:15:51.799 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:51.799 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:51.799 a tracepoint group. First tpoint inside a group can be enabled by 00:15:51.799 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:51.799 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:51.799 in /include/spdk_internal/trace_defs.h 00:15:51.799 00:15:51.799 Other options: 00:15:51.799 -h, --help show this usage 00:15:51.799 -v, --version print SPDK version 00:15:51.799 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:51.799 --env-context Opaque context for use of the env implementation 00:15:51.799 app_ut [options] 00:15:51.799 00:15:51.800 CPU options: 00:15:51.800 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:51.800 (like [0,1,10]) 00:15:51.800 --lcores lcore to CPU mapping list. The list is in the format: 00:15:51.800 [<,lcores[@CPUs]>...] 00:15:51.800 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:51.800 Within the group, '-' is used for range separator, 00:15:51.800 app_ut: unrecognized option `--test-long-opt' 00:15:51.800 ',' is used for single number separator. 00:15:51.800 '( )' can be omitted for single element group, 00:15:51.800 '@' can be omitted if cpus and lcores have the same value 00:15:51.800 --disable-cpumask-locks Disable CPU core lock files. 
00:15:51.800 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:51.800 pollers in the app support interrupt mode) 00:15:51.800 -p, --main-core main (primary) core for DPDK 00:15:51.800 00:15:51.800 Configuration options: 00:15:51.800 -c, --config, --json JSON config file 00:15:51.800 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:51.800 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:51.800 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:51.800 --rpcs-allowed comma-separated list of permitted RPCS 00:15:51.800 --json-ignore-init-errors don't exit on invalid config entry 00:15:51.800 00:15:51.800 Memory options: 00:15:51.800 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:51.800 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:51.800 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:51.800 -R, --huge-unlink unlink huge files after initialization 00:15:51.800 -n, --mem-channels number of memory channels used for DPDK 00:15:51.800 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:51.800 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:51.800 --no-huge run without using hugepages 00:15:51.800 -i, --shm-id shared memory ID (optional) 00:15:51.800 -g, --single-file-segments force creating just one hugetlbfs file 00:15:51.800 00:15:51.800 PCI options: 00:15:51.800 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:51.800 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:51.800 -u, --no-pci disable PCI access 00:15:51.800 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:51.800 00:15:51.800 Log options: 00:15:51.800 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:51.800 --silence-noticelog disable notice level logging to stderr 00:15:51.800 00:15:51.800 Trace options: 00:15:51.800 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:51.800 setting 0 to disable trace (default 32768) 00:15:51.800 Tracepoints vary in size and can use more than one trace entry. 00:15:51.800 -e, --tpoint-group [:] 00:15:51.800 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:51.800 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:51.800 a tracepoint group. First tpoint inside a group can be enabled by 00:15:51.800 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:51.800 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:15:51.800 in /include/spdk_internal/trace_defs.h 00:15:51.800 00:15:51.800 Other options: 00:15:51.800 -h, --help show this usage 00:15:51.800 -v, --version print SPDK version 00:15:51.800 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:51.800 --env-context Opaque context for use of the env implementation 00:15:51.800 app_ut [options] 00:15:51.800 00:15:51.800 CPU options: 00:15:51.800 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:15:51.800 (like [0,1,10]) 00:15:51.800 --lcores lcore to CPU mapping list. The list is in the format: 00:15:51.800 [<,lcores[@CPUs]>...] 
00:15:51.800 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:15:51.800 Within the group, '-' is used for range separator, 00:15:51.800 ',' is used for single number separator. 00:15:51.800 '( )' can be omitted for single element group, 00:15:51.800 '@' can be omitted if cpus and lcores have the same value 00:15:51.800 --disable-cpumask-locks Disable CPU core lock files. 00:15:51.800 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:15:51.800 pollers in the app support interrupt mode) 00:15:51.800 -p, --main-core main (primary) core for DPDK 00:15:51.800 00:15:51.800 Configuration options: 00:15:51.800 -c, --config, --json JSON config file 00:15:51.800 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:15:51.800 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:15:51.800 --wait-for-rpc wait for RPCs to initialize subsystems 00:15:51.800 --rpcs-allowed comma-separated list of permitted RPCS 00:15:51.800 --json-ignore-init-errors don't exit on invalid config entry 00:15:51.800 00:15:51.800 Memory options: 00:15:51.800 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:15:51.800 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:15:51.800 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:15:51.800 -R, --huge-unlink unlink huge files after initialization 00:15:51.800 -n, --mem-channels number of memory channels used for DPDK 00:15:51.800 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:15:51.800 --msg-mempool-size global message memory pool size in count (default: 262143) 00:15:51.800 --no-huge run without using hugepages 00:15:51.800 -i, --shm-id shared memory ID (optional) 00:15:51.800 -g, --single-file-segments force creating just one hugetlbfs file 00:15:51.800 00:15:51.800 PCI options: 00:15:51.800 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:15:51.800 -B, --pci-blocked pci addr to block (can be used more than once) 00:15:51.800 -u, --no-pci disable PCI access 00:15:51.800 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:15:51.800 00:15:51.800 Log options: 00:15:51.800 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:15:51.800 --silence-noticelog disable notice level logging to stderr 00:15:51.800 00:15:51.800 Trace options: 00:15:51.800 --num-trace-entries number of trace entries for each core, must be power of 2, 00:15:51.800 setting 0 to disable trace (default 32768) 00:15:51.800 Tracepoints vary in size and can use more than one trace entry. 00:15:51.800 -e, --tpoint-group [:] 00:15:51.800 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:15:51.800 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:15:51.800 a tracepoint group. First tpoint inside a group can be enabled by 00:15:51.800 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:15:51.800 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:15:51.800 in /include/spdk_internal/trace_defs.h 00:15:51.800 00:15:51.800 Other options: 00:15:51.800 -h, --help show this usage 00:15:51.800 -v, --version print SPDK version 00:15:51.800 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:15:51.800 --env-context Opaque context for use of the env implementation 00:15:51.800 [2024-05-16 07:28:45.118215] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:15:51.800 [2024-05-16 07:28:45.118434] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:15:51.800 passed 00:15:51.800 00:15:51.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.800 suites 1 1 n/a 0 0 00:15:51.800 tests 1 1 1 0 0 00:15:51.800 asserts 8 8 8 0 n/a 00:15:51.800 00:15:51.800 Elapsed time = 0.000 seconds 00:15:51.800 [2024-05-16 07:28:45.118529] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:15:51.800 07:28:45 unittest.unittest_event -- unit/unittest.sh@52 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:15:51.800 00:15:51.800 00:15:51.800 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.800 http://cunit.sourceforge.net/ 00:15:51.800 00:15:51.800 00:15:51.800 Suite: app_suite 00:15:51.800 Test: test_create_reactor ...passed 00:15:51.800 Test: test_init_reactors ...passed 00:15:51.800 Test: test_event_call ...passed 00:15:51.800 Test: test_schedule_thread ...passed 00:15:51.800 Test: test_reschedule_thread ...passed 00:15:51.800 Test: test_bind_thread ...passed 00:15:51.800 Test: test_for_each_reactor ...passed 00:15:51.800 Test: test_reactor_stats ...passed 00:15:51.800 Test: test_scheduler ...passed 00:15:51.800 Test: test_governor ...passed 00:15:51.800 00:15:51.800 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.800 suites 1 1 n/a 0 0 00:15:51.800 tests 10 10 10 0 0 00:15:51.800 asserts 336 336 336 0 n/a 00:15:51.800 00:15:51.800 Elapsed time = 0.000 seconds 00:15:51.800 00:15:51.800 real 0m0.015s 00:15:51.800 user 0m0.012s 00:15:51.800 sys 0m0.004s 00:15:51.800 07:28:45 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.800 07:28:45 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:15:51.800 ************************************ 00:15:51.800 END TEST unittest_event 00:15:51.800 ************************************ 00:15:51.800 07:28:45 unittest -- unit/unittest.sh@234 -- # uname -s 00:15:51.800 07:28:45 unittest -- unit/unittest.sh@234 -- # '[' FreeBSD = Linux ']' 00:15:51.801 07:28:45 unittest -- unit/unittest.sh@238 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 START TEST unittest_accel 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:15:51.801 00:15:51.801 00:15:51.801 CUnit - A unit testing 
framework for C - Version 2.1-3 00:15:51.801 http://cunit.sourceforge.net/ 00:15:51.801 00:15:51.801 00:15:51.801 Suite: accel_sequence 00:15:51.801 Test: test_sequence_fill_copy ...passed 00:15:51.801 Test: test_sequence_abort ...passed 00:15:51.801 Test: test_sequence_append_error ...passed 00:15:51.801 Test: test_sequence_completion_error ...[2024-05-16 07:28:45.176071] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82d1bffc0 00:15:51.801 [2024-05-16 07:28:45.176351] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1902:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82d1bffc0 00:15:51.801 [2024-05-16 07:28:45.176395] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82d1bffc0 00:15:51.801 [2024-05-16 07:28:45.176425] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1812:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82d1bffc0 00:15:51.801 passed 00:15:51.801 Test: test_sequence_decompress ...passed 00:15:51.801 Test: test_sequence_reverse ...passed 00:15:51.801 Test: test_sequence_copy_elision ...passed 00:15:51.801 Test: test_sequence_accel_buffers ...passed 00:15:51.801 Test: test_sequence_memory_domain ...[2024-05-16 07:28:45.177875] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1704:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:15:51.801 passed 00:15:51.801 Test: test_sequence_module_memory_domain ...[2024-05-16 07:28:45.177939] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1743:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:15:51.801 passed 00:15:51.801 Test: test_sequence_crypto ...passed 00:15:51.801 Test: test_sequence_driver ...[2024-05-16 07:28:45.178693] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1851:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82d1c0e00 using driver: ut 00:15:51.801 passed 00:15:51.801 Test: test_sequence_same_iovs ...[2024-05-16 07:28:45.178740] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1916:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82d1c0e00 through driver: ut 00:15:51.801 passed 00:15:51.801 Test: test_sequence_crc32 ...passed 00:15:51.801 Suite: accel 00:15:51.801 Test: test_spdk_accel_task_complete ...passed 00:15:51.801 Test: test_get_task ...passed 00:15:51.801 Test: test_spdk_accel_submit_copy ...passed 00:15:51.801 Test: test_spdk_accel_submit_dualcast ...[2024-05-16 07:28:45.179377] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:15:51.801 [2024-05-16 07:28:45.179396] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:15:51.801 passed 00:15:51.801 Test: test_spdk_accel_submit_compare ...passed 00:15:51.801 Test: test_spdk_accel_submit_fill ...passed 00:15:51.801 Test: test_spdk_accel_submit_crc32c ...passed 00:15:51.801 Test: test_spdk_accel_submit_crc32cv ...passed 00:15:51.801 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:15:51.801 Test: test_spdk_accel_submit_xor ...passed 00:15:51.801 Test: test_spdk_accel_module_find_by_name ...passed 00:15:51.801 Test: test_spdk_accel_module_register ...passed 00:15:51.801 00:15:51.801 Run Summary: Type 
Total Ran Passed Failed Inactive 00:15:51.801 suites 2 2 n/a 0 0 00:15:51.801 tests 26 26 26 0 0 00:15:51.801 asserts 827 827 827 0 n/a 00:15:51.801 00:15:51.801 Elapsed time = 0.008 seconds 00:15:51.801 00:15:51.801 real 0m0.013s 00:15:51.801 user 0m0.013s 00:15:51.801 sys 0m0.000s 00:15:51.801 07:28:45 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.801 07:28:45 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 END TEST unittest_accel 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest -- unit/unittest.sh@239 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 START TEST unittest_ioat 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:15:51.801 00:15:51.801 00:15:51.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.801 http://cunit.sourceforge.net/ 00:15:51.801 00:15:51.801 00:15:51.801 Suite: ioat 00:15:51.801 Test: ioat_state_check ...passed 00:15:51.801 00:15:51.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.801 suites 1 1 n/a 0 0 00:15:51.801 tests 1 1 1 0 0 00:15:51.801 asserts 32 32 32 0 n/a 00:15:51.801 00:15:51.801 Elapsed time = 0.000 seconds 00:15:51.801 00:15:51.801 real 0m0.004s 00:15:51.801 user 0m0.000s 00:15:51.801 sys 0m0.008s 00:15:51.801 07:28:45 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.801 ************************************ 00:15:51.801 END TEST unittest_ioat 00:15:51.801 07:28:45 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest -- unit/unittest.sh@240 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:51.801 07:28:45 unittest -- unit/unittest.sh@241 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 START TEST unittest_idxd_user 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:15:51.801 00:15:51.801 00:15:51.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.801 http://cunit.sourceforge.net/ 00:15:51.801 00:15:51.801 00:15:51.801 Suite: idxd_user 00:15:51.801 Test: test_idxd_wait_cmd ...passed 00:15:51.801 Test: test_idxd_reset_dev ...[2024-05-16 07:28:45.261506] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:15:51.801 [2024-05-16 07:28:45.261786] 
/usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:15:51.801 [2024-05-16 07:28:45.261829] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:15:51.801 passed 00:15:51.801 Test: test_idxd_group_config ...passed 00:15:51.801 Test: test_idxd_wq_config ...passed 00:15:51.801 00:15:51.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.801 suites 1 1 n/a 0 0 00:15:51.801 tests 4 4 4 0 0 00:15:51.801 asserts 20 20 20 0 n/a 00:15:51.801 00:15:51.801 Elapsed time = 0.000 seconds 00:15:51.801 [2024-05-16 07:28:45.261861] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:15:51.801 00:15:51.801 real 0m0.006s 00:15:51.801 user 0m0.000s 00:15:51.801 sys 0m0.008s 00:15:51.801 07:28:45 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.801 07:28:45 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 END TEST unittest_idxd_user 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest -- unit/unittest.sh@243 -- # run_test unittest_iscsi unittest_iscsi 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.801 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:51.801 ************************************ 00:15:51.801 START TEST unittest_iscsi 00:15:51.801 ************************************ 00:15:51.801 07:28:45 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:15:51.801 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:15:51.801 00:15:51.801 00:15:51.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.801 http://cunit.sourceforge.net/ 00:15:51.801 00:15:51.801 00:15:51.801 Suite: conn_suite 00:15:51.801 Test: read_task_split_in_order_case ...passed 00:15:51.801 Test: read_task_split_reverse_order_case ...passed 00:15:51.801 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:15:51.801 Test: process_non_read_task_completion_test ...passed 00:15:51.801 Test: free_tasks_on_connection ...passed 00:15:51.801 Test: free_tasks_with_queued_datain ...passed 00:15:51.801 Test: abort_queued_datain_task_test ...passed 00:15:51.801 Test: abort_queued_datain_tasks_test ...passed 00:15:51.801 00:15:51.801 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.801 suites 1 1 n/a 0 0 00:15:51.801 tests 8 8 8 0 0 00:15:51.801 asserts 230 230 230 0 n/a 00:15:51.801 00:15:51.801 Elapsed time = 0.000 seconds 00:15:51.801 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:15:51.801 00:15:51.801 00:15:51.801 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.801 http://cunit.sourceforge.net/ 00:15:51.801 00:15:51.801 00:15:51.801 Suite: iscsi_suite 00:15:51.801 Test: param_negotiation_test ...passed 00:15:51.801 Test: list_negotiation_test ...passed 00:15:51.801 Test: parse_valid_test ...passed 00:15:51.802 Test: parse_invalid_test ...[2024-05-16 07:28:45.305796] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:15:51.802 [2024-05-16 
07:28:45.306037] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:15:51.802 [2024-05-16 07:28:45.306056] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:15:51.802 [2024-05-16 07:28:45.306087] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:15:51.802 [2024-05-16 07:28:45.306107] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:15:51.802 [2024-05-16 07:28:45.306122] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:15:51.802 passed 00:15:51.802 00:15:51.802 [2024-05-16 07:28:45.306136] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:15:51.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.802 suites 1 1 n/a 0 0 00:15:51.802 tests 4 4 4 0 0 00:15:51.802 asserts 161 161 161 0 n/a 00:15:51.802 00:15:51.802 Elapsed time = 0.000 seconds 00:15:51.802 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:15:51.802 00:15:51.802 00:15:51.802 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.802 http://cunit.sourceforge.net/ 00:15:51.802 00:15:51.802 00:15:51.802 Suite: iscsi_target_node_suite 00:15:51.802 Test: add_lun_test_cases ...passed 00:15:51.802 Test: allow_any_allowed ...passed 00:15:51.802 Test: allow_ipv6_allowed ...passed 00:15:51.802 Test: allow_ipv6_denied ...passed 00:15:51.802 Test: allow_ipv6_invalid ...passed 00:15:51.802 Test: allow_ipv4_allowed ...passed 00:15:51.802 Test: allow_ipv4_denied ...passed 00:15:51.802 Test: allow_ipv4_invalid ...passed 00:15:51.802 Test: node_access_allowed ...passed 00:15:51.802 Test: node_access_denied_by_empty_netmask ...passed 00:15:51.802 Test: node_access_multi_initiator_groups_cases ...passed 00:15:51.802 Test: allow_iscsi_name_multi_maps_case ...passed 00:15:51.802 Test: chap_param_test_cases ...passed 00:15:51.802 00:15:51.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.802 suites 1 1 n/a 0 0 00:15:51.802 tests 13 13 13 0 0 00:15:51.802 asserts 50 50 50 0 n/a 00:15:51.802 00:15:51.802 Elapsed time = 0.000 seconds 00:15:51.802 [2024-05-16 07:28:45.310873] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:15:51.802 [2024-05-16 07:28:45.311067] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:15:51.802 [2024-05-16 07:28:45.311099] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:15:51.802 [2024-05-16 07:28:45.311117] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:15:51.802 [2024-05-16 07:28:45.311134] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:15:51.802 [2024-05-16 07:28:45.311277] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:15:51.802 [2024-05-16 07:28:45.311289] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 
00:15:51.802 [2024-05-16 07:28:45.311298] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:15:51.802 [2024-05-16 07:28:45.311306] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:15:51.802 [2024-05-16 07:28:45.311315] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:15:51.802 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:15:51.802 00:15:51.802 00:15:51.802 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.802 http://cunit.sourceforge.net/ 00:15:51.802 00:15:51.802 00:15:51.802 Suite: iscsi_suite 00:15:51.802 Test: op_login_check_target_test ...passed 00:15:51.802 Test: op_login_session_normal_test ...[2024-05-16 07:28:45.318006] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:15:51.802 [2024-05-16 07:28:45.318235] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:51.802 [2024-05-16 07:28:45.318421] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:51.802 [2024-05-16 07:28:45.318447] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:15:51.802 passed 00:15:51.802 Test: maxburstlength_test ...passed 00:15:51.802 Test: underflow_for_read_transfer_test ...passed 00:15:51.802 Test: underflow_for_zero_read_transfer_test ...passed 00:15:51.802 Test: underflow_for_request_sense_test ...[2024-05-16 07:28:45.318543] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:15:51.802 [2024-05-16 07:28:45.318570] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:15:51.802 [2024-05-16 07:28:45.318608] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:15:51.802 [2024-05-16 07:28:45.318630] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:15:51.802 [2024-05-16 07:28:45.318706] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:15:51.802 [2024-05-16 07:28:45.318731] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4557:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:15:51.802 passed 00:15:51.802 Test: underflow_for_check_condition_test ...passed 00:15:51.802 Test: add_transfer_task_test ...passed 00:15:51.802 Test: get_transfer_task_test ...passed 00:15:51.802 Test: del_transfer_task_test ...passed 00:15:51.802 Test: clear_all_transfer_tasks_test ...passed 00:15:51.802 Test: build_iovs_test ...passed 00:15:51.802 Test: build_iovs_with_md_test ...passed 00:15:51.802 Test: pdu_hdr_op_login_test ...[2024-05-16 07:28:45.319712] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:15:51.802 
[2024-05-16 07:28:45.319798] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:15:51.802 passed 00:15:51.802 Test: pdu_hdr_op_text_test ...passed 00:15:51.802 Test: pdu_hdr_op_logout_test ...passed 00:15:51.802 Test: pdu_hdr_op_scsi_test ...[2024-05-16 07:28:45.319844] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:15:51.802 [2024-05-16 07:28:45.319892] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2247:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:15:51.802 [2024-05-16 07:28:45.319937] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:15:51.802 [2024-05-16 07:28:45.319971] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2292:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:15:51.802 [2024-05-16 07:28:45.320011] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2523:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:15:51.802 passed 00:15:51.802 Test: pdu_hdr_op_task_mgmt_test ...[2024-05-16 07:28:45.320063] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:15:51.802 [2024-05-16 07:28:45.320094] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:15:51.802 [2024-05-16 07:28:45.320125] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:15:51.802 [2024-05-16 07:28:45.320170] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:15:51.802 [2024-05-16 07:28:45.320212] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3411:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:15:51.802 [2024-05-16 07:28:45.320254] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:15:51.802 [2024-05-16 07:28:45.320293] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:15:51.802 passed 00:15:51.802 Test: pdu_hdr_op_nopout_test ...[2024-05-16 07:28:45.320327] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:15:51.802 [2024-05-16 07:28:45.320369] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:15:51.802 [2024-05-16 07:28:45.320421] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:15:51.802 [2024-05-16 07:28:45.320460] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:15:51.802 [2024-05-16 07:28:45.320507] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:15:51.802 passed 00:15:51.802 Test: pdu_hdr_op_data_test ...[2024-05-16 07:28:45.320556] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:15:51.802 [2024-05-16 07:28:45.320612] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:15:51.802 [2024-05-16 07:28:45.320645] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:15:51.802 [2024-05-16 07:28:45.320702] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4223:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:15:51.802 [2024-05-16 07:28:45.320736] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:15:51.802 passed 00:15:51.802 Test: empty_text_with_cbit_test ...passed 00:15:51.802 Test: pdu_payload_read_test ...[2024-05-16 07:28:45.320768] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:15:51.802 [2024-05-16 07:28:45.320813] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:15:51.802 [2024-05-16 07:28:45.321983] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4638:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:15:51.802 passed 00:15:51.802 Test: data_out_pdu_sequence_test ...passed 00:15:51.802 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:15:51.802 00:15:51.802 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.802 suites 1 1 n/a 0 0 00:15:51.802 tests 24 24 24 0 0 00:15:51.802 asserts 150253 150253 150253 0 n/a 00:15:51.802 00:15:51.802 Elapsed time = 0.008 seconds 00:15:51.802 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:15:51.802 00:15:51.802 00:15:51.803 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.803 http://cunit.sourceforge.net/ 00:15:51.803 00:15:51.803 00:15:51.803 Suite: init_grp_suite 00:15:51.803 Test: create_initiator_group_success_case ...passed 00:15:51.803 Test: find_initiator_group_success_case ...passed 00:15:51.803 Test: register_initiator_group_twice_case ...passed 00:15:51.803 Test: add_initiator_name_success_case ...passed 00:15:51.803 Test: add_initiator_name_fail_case ...[2024-05-16 07:28:45.334415] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:15:51.803 passed 00:15:51.803 Test: delete_all_initiator_names_success_case ...passed 00:15:51.803 Test: add_netmask_success_case ...passed 00:15:51.803 Test: add_netmask_fail_case ...passed 00:15:51.803 Test: delete_all_netmasks_success_case ...passed 00:15:51.803 Test: initiator_name_overwrite_all_to_any_case ...passed 00:15:51.803 Test: netmask_overwrite_all_to_any_case ...passed 00:15:51.803 Test: add_delete_initiator_names_case ...passed 00:15:51.803 Test: add_duplicated_initiator_names_case ...passed 00:15:51.803 Test: delete_nonexisting_initiator_names_case ...[2024-05-16 07:28:45.334679] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:15:51.803 passed 00:15:51.803 Test: add_delete_netmasks_case ...passed 00:15:51.803 Test: add_duplicated_netmasks_case ...passed 00:15:51.803 Test: 
delete_nonexisting_netmasks_case ...passed 00:15:51.803 00:15:51.803 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.803 suites 1 1 n/a 0 0 00:15:51.803 tests 17 17 17 0 0 00:15:51.803 asserts 108 108 108 0 n/a 00:15:51.803 00:15:51.803 Elapsed time = 0.000 seconds 00:15:51.803 07:28:45 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:15:51.803 00:15:51.803 00:15:51.803 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.803 http://cunit.sourceforge.net/ 00:15:51.803 00:15:51.803 00:15:51.803 Suite: portal_grp_suite 00:15:51.803 Test: portal_create_ipv4_normal_case ...passed 00:15:51.803 Test: portal_create_ipv6_normal_case ...passed 00:15:51.803 Test: portal_create_ipv4_wildcard_case ...passed 00:15:51.803 Test: portal_create_ipv6_wildcard_case ...passed 00:15:51.803 Test: portal_create_twice_case ...passed 00:15:51.803 Test: portal_grp_register_unregister_case ...passed 00:15:51.803 Test: portal_grp_register_twice_case ...passed 00:15:51.803 Test: portal_grp_add_delete_case ...[2024-05-16 07:28:45.341139] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:15:51.803 passed 00:15:51.803 Test: portal_grp_add_delete_twice_case ...passed 00:15:51.803 00:15:51.803 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.803 suites 1 1 n/a 0 0 00:15:51.803 tests 9 9 9 0 0 00:15:51.803 asserts 44 44 44 0 n/a 00:15:51.803 00:15:51.803 Elapsed time = 0.000 seconds 00:15:51.803 00:15:51.803 real 0m0.049s 00:15:51.803 user 0m0.027s 00:15:51.803 sys 0m0.039s 00:15:51.803 07:28:45 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.803 ************************************ 00:15:51.803 END TEST unittest_iscsi 00:15:51.803 ************************************ 00:15:51.803 07:28:45 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:15:52.062 07:28:45 unittest -- unit/unittest.sh@244 -- # run_test unittest_json unittest_json 00:15:52.062 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.062 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.062 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:52.062 ************************************ 00:15:52.062 START TEST unittest_json 00:15:52.062 ************************************ 00:15:52.062 07:28:45 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:15:52.062 07:28:45 unittest.unittest_json -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:15:52.062 00:15:52.062 00:15:52.062 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.062 http://cunit.sourceforge.net/ 00:15:52.062 00:15:52.062 00:15:52.062 Suite: json 00:15:52.062 Test: test_parse_literal ...passed 00:15:52.062 Test: test_parse_string_simple ...passed 00:15:52.062 Test: test_parse_string_control_chars ...passed 00:15:52.062 Test: test_parse_string_utf8 ...passed 00:15:52.062 Test: test_parse_string_escapes_twochar ...passed 00:15:52.062 Test: test_parse_string_escapes_unicode ...passed 00:15:52.062 Test: test_parse_number ...passed 00:15:52.062 Test: test_parse_array ...passed 00:15:52.062 Test: test_parse_object ...passed 00:15:52.062 Test: test_parse_nesting ...passed 00:15:52.062 Test: test_parse_comment ...passed 00:15:52.062 00:15:52.062 Run Summary: Type Total Ran Passed 
Failed Inactive 00:15:52.062 suites 1 1 n/a 0 0 00:15:52.062 tests 11 11 11 0 0 00:15:52.062 asserts 1516 1516 1516 0 n/a 00:15:52.062 00:15:52.062 Elapsed time = 0.000 seconds 00:15:52.062 07:28:45 unittest.unittest_json -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:15:52.062 00:15:52.062 00:15:52.062 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.062 http://cunit.sourceforge.net/ 00:15:52.062 00:15:52.062 00:15:52.062 Suite: json 00:15:52.062 Test: test_strequal ...passed 00:15:52.062 Test: test_num_to_uint16 ...passed 00:15:52.062 Test: test_num_to_int32 ...passed 00:15:52.062 Test: test_num_to_uint64 ...passed 00:15:52.062 Test: test_decode_object ...passed 00:15:52.062 Test: test_decode_array ...passed 00:15:52.062 Test: test_decode_bool ...passed 00:15:52.062 Test: test_decode_uint16 ...passed 00:15:52.062 Test: test_decode_int32 ...passed 00:15:52.062 Test: test_decode_uint32 ...passed 00:15:52.062 Test: test_decode_uint64 ...passed 00:15:52.062 Test: test_decode_string ...passed 00:15:52.062 Test: test_decode_uuid ...passed 00:15:52.062 Test: test_find ...passed 00:15:52.062 Test: test_find_array ...passed 00:15:52.062 Test: test_iterating ...passed 00:15:52.062 Test: test_free_object ...passed 00:15:52.062 00:15:52.062 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.062 suites 1 1 n/a 0 0 00:15:52.062 tests 17 17 17 0 0 00:15:52.062 asserts 236 236 236 0 n/a 00:15:52.062 00:15:52.062 Elapsed time = 0.000 seconds 00:15:52.062 07:28:45 unittest.unittest_json -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:15:52.062 00:15:52.062 00:15:52.062 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.062 http://cunit.sourceforge.net/ 00:15:52.062 00:15:52.062 00:15:52.062 Suite: json 00:15:52.062 Test: test_write_literal ...passed 00:15:52.062 Test: test_write_string_simple ...passed 00:15:52.062 Test: test_write_string_escapes ...passed 00:15:52.062 Test: test_write_string_utf16le ...passed 00:15:52.062 Test: test_write_number_int32 ...passed 00:15:52.062 Test: test_write_number_uint32 ...passed 00:15:52.062 Test: test_write_number_uint128 ...passed 00:15:52.062 Test: test_write_string_number_uint128 ...passed 00:15:52.062 Test: test_write_number_int64 ...passed 00:15:52.062 Test: test_write_number_uint64 ...passed 00:15:52.062 Test: test_write_number_double ...passed 00:15:52.062 Test: test_write_uuid ...passed 00:15:52.062 Test: test_write_array ...passed 00:15:52.062 Test: test_write_object ...passed 00:15:52.062 Test: test_write_nesting ...passed 00:15:52.062 Test: test_write_val ...passed 00:15:52.062 00:15:52.062 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.062 suites 1 1 n/a 0 0 00:15:52.062 tests 16 16 16 0 0 00:15:52.062 asserts 918 918 918 0 n/a 00:15:52.062 00:15:52.062 Elapsed time = 0.000 seconds 00:15:52.062 07:28:45 unittest.unittest_json -- unit/unittest.sh@79 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:15:52.062 00:15:52.062 00:15:52.062 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.062 http://cunit.sourceforge.net/ 00:15:52.062 00:15:52.062 00:15:52.062 Suite: jsonrpc 00:15:52.062 Test: test_parse_request ...passed 00:15:52.063 Test: test_parse_request_streaming ...passed 00:15:52.063 00:15:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.063 suites 1 1 n/a 0 0 00:15:52.063 tests 2 2 2 0 0 00:15:52.063 
asserts 289 289 289 0 n/a 00:15:52.063 00:15:52.063 Elapsed time = 0.000 seconds 00:15:52.063 00:15:52.063 real 0m0.024s 00:15:52.063 user 0m0.015s 00:15:52.063 sys 0m0.015s 00:15:52.063 07:28:45 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.063 ************************************ 00:15:52.063 END TEST unittest_json 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 07:28:45 unittest -- unit/unittest.sh@245 -- # run_test unittest_rpc unittest_rpc 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 ************************************ 00:15:52.063 START TEST unittest_rpc 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:15:52.063 07:28:45 unittest.unittest_rpc -- unit/unittest.sh@83 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:15:52.063 00:15:52.063 00:15:52.063 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.063 http://cunit.sourceforge.net/ 00:15:52.063 00:15:52.063 00:15:52.063 Suite: rpc 00:15:52.063 Test: test_jsonrpc_handler ...passed 00:15:52.063 Test: test_spdk_rpc_is_method_allowed ...passed 00:15:52.063 Test: test_rpc_get_methods ...[2024-05-16 07:28:45.436950] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:15:52.063 passed 00:15:52.063 Test: test_rpc_spdk_get_version ...passed 00:15:52.063 Test: test_spdk_rpc_listen_close ...passed 00:15:52.063 Test: test_rpc_run_multiple_servers ...passed 00:15:52.063 00:15:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.063 suites 1 1 n/a 0 0 00:15:52.063 tests 6 6 6 0 0 00:15:52.063 asserts 23 23 23 0 n/a 00:15:52.063 00:15:52.063 Elapsed time = 0.000 seconds 00:15:52.063 00:15:52.063 real 0m0.006s 00:15:52.063 user 0m0.005s 00:15:52.063 sys 0m0.004s 00:15:52.063 07:28:45 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.063 07:28:45 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 ************************************ 00:15:52.063 END TEST unittest_rpc 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest -- unit/unittest.sh@246 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 ************************************ 00:15:52.063 START TEST unittest_notify 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:15:52.063 00:15:52.063 00:15:52.063 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.063 http://cunit.sourceforge.net/ 00:15:52.063 00:15:52.063 00:15:52.063 Suite: app_suite 00:15:52.063 Test: notify ...passed 00:15:52.063 00:15:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.063 suites 1 1 
n/a 0 0 00:15:52.063 tests 1 1 1 0 0 00:15:52.063 asserts 13 13 13 0 n/a 00:15:52.063 00:15:52.063 Elapsed time = 0.000 seconds 00:15:52.063 00:15:52.063 real 0m0.005s 00:15:52.063 user 0m0.004s 00:15:52.063 sys 0m0.004s 00:15:52.063 07:28:45 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.063 07:28:45 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 ************************************ 00:15:52.063 END TEST unittest_notify 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest -- unit/unittest.sh@247 -- # run_test unittest_nvme unittest_nvme 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.063 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 ************************************ 00:15:52.063 START TEST unittest_nvme 00:15:52.063 ************************************ 00:15:52.063 07:28:45 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:15:52.063 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:15:52.063 00:15:52.063 00:15:52.063 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.063 http://cunit.sourceforge.net/ 00:15:52.063 00:15:52.063 00:15:52.063 Suite: nvme 00:15:52.063 Test: test_opc_data_transfer ...passed 00:15:52.063 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:15:52.063 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:15:52.063 Test: test_trid_parse_and_compare ...[2024-05-16 07:28:45.517308] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:15:52.063 [2024-05-16 07:28:45.517503] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:52.063 [2024-05-16 07:28:45.517522] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1189:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:15:52.063 [2024-05-16 07:28:45.517536] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:52.063 passed 00:15:52.063 Test: test_trid_trtype_str ...passed 00:15:52.063 Test: test_trid_adrfam_str ...passed 00:15:52.063 Test: test_nvme_ctrlr_probe ...passed 00:15:52.063 Test: test_spdk_nvme_probe ...passed 00:15:52.063 Test: test_spdk_nvme_connect ...[2024-05-16 07:28:45.517550] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:15:52.063 [2024-05-16 07:28:45.517561] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:15:52.063 [2024-05-16 07:28:45.517676] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:15:52.063 [2024-05-16 07:28:45.517705] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:52.063 [2024-05-16 07:28:45.517722] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:15:52.063 [2024-05-16 07:28:45.517737] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:15:52.063 
[2024-05-16 07:28:45.517750] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:15:52.063 [2024-05-16 07:28:45.517772] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:15:52.063 passed 00:15:52.063 Test: test_nvme_ctrlr_probe_internal ...passed 00:15:52.063 Test: test_nvme_init_controllers ...passed 00:15:52.063 Test: test_nvme_driver_init ...[2024-05-16 07:28:45.517828] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:52.063 [2024-05-16 07:28:45.517841] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:15:52.063 [2024-05-16 07:28:45.517869] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:15:52.063 [2024-05-16 07:28:45.517882] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:52.063 [2024-05-16 07:28:45.517898] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:15:52.063 [2024-05-16 07:28:45.517926] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:15:52.063 [2024-05-16 07:28:45.517939] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:15:52.063 [2024-05-16 07:28:45.627529] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:15:52.063 passed 00:15:52.063 Test: test_spdk_nvme_detach ...passed 00:15:52.063 Test: test_nvme_completion_poll_cb ...passed 00:15:52.063 Test: test_nvme_user_copy_cmd_complete ...passed 00:15:52.063 Test: test_nvme_allocate_request_null ...passed 00:15:52.063 Test: test_nvme_allocate_request ...passed 00:15:52.063 Test: test_nvme_free_request ...passed 00:15:52.063 Test: test_nvme_allocate_request_user_copy ...passed 00:15:52.063 Test: test_nvme_robust_mutex_init_shared ...passed 00:15:52.063 Test: test_nvme_request_check_timeout ...passed 00:15:52.063 Test: test_nvme_wait_for_completion ...passed 00:15:52.063 Test: test_spdk_nvme_parse_func ...passed 00:15:52.063 Test: test_spdk_nvme_detach_async ...passed 00:15:52.063 Test: test_nvme_parse_addr ...passed 00:15:52.063 00:15:52.063 [2024-05-16 07:28:45.627856] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:15:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.063 suites 1 1 n/a 0 0 00:15:52.063 tests 25 25 25 0 0 00:15:52.063 asserts 326 326 326 0 n/a 00:15:52.063 00:15:52.063 Elapsed time = 0.000 seconds 00:15:52.324 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:15:52.324 00:15:52.324 00:15:52.324 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.324 http://cunit.sourceforge.net/ 00:15:52.324 00:15:52.324 00:15:52.324 Suite: nvme_ctrlr 00:15:52.324 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-16 07:28:45.639579] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.324 passed 00:15:52.324 Test: 
test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-16 07:28:45.641116] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.324 passed 00:15:52.324 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-16 07:28:45.642267] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.324 passed 00:15:52.324 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-16 07:28:45.643504] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.324 passed 00:15:52.324 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-16 07:28:45.644786] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.645987] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-16 07:28:45.647191] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-16 07:28:45.648425] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:52.325 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-16 07:28:45.650775] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.653070] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-16 07:28:45.654252] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:52.325 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-16 07:28:45.656537] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.657658] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-16 07:28:45.659860] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:15:52.325 Test: test_nvme_ctrlr_init_delay ...[2024-05-16 07:28:45.662094] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_alloc_io_qpair_rr_1 ...[2024-05-16 07:28:45.663221] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.663279] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:15:52.325 [2024-05-16 07:28:45.663298] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 
399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:52.325 passed 00:15:52.325 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:15:52.325 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:15:52.325 Test: test_alloc_io_qpair_wrr_1 ...passed 00:15:52.325 Test: test_alloc_io_qpair_wrr_2 ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-05-16 07:28:45.663310] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:52.325 [2024-05-16 07:28:45.663322] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:15:52.325 [2024-05-16 07:28:45.663384] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.663410] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.663426] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:15:52.325 [2024-05-16 07:28:45.663459] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:15:52.325 [2024-05-16 07:28:45.663477] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:15:52.325 [2024-05-16 07:28:45.663490] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:15:52.325 [2024-05-16 07:28:45.663502] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_fail ...passed 00:15:52.325 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:15:52.325 Test: test_nvme_ctrlr_set_supported_features ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:15:52.325 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-16 07:28:45.663532] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:15:52.325 [2024-05-16 07:28:45.663587] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:15:52.325 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-16 07:28:45.705164] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-16 07:28:45.711844] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-16 07:28:45.712994] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.713027] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:15:52.325 passed 00:15:52.325 Test: test_alloc_io_qpair_fail ...[2024-05-16 07:28:45.714149] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 [2024-05-16 07:28:45.714169] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_add_remove_process ...passed 00:15:52.325 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:15:52.325 Test: test_nvme_ctrlr_set_state ...passed[2024-05-16 07:28:45.714206] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:15:52.325 00:15:52.325 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-16 07:28:45.714221] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-16 07:28:45.717852] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-16 07:28:45.725194] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_reset ...[2024-05-16 07:28:45.726342] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_aer_callback ...[2024-05-16 07:28:45.726395] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-16 07:28:45.727527] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:15:52.325 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:15:52.325 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-16 07:28:45.728787] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:15:52.325 Test: test_nvme_ctrlr_ana_resize ...[2024-05-16 07:28:45.729969] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:15:52.325 Test: test_nvme_transport_ctrlr_ready ...[2024-05-16 07:28:45.731150] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:15:52.325 passed 00:15:52.325 Test: test_nvme_ctrlr_disable ...[2024-05-16 07:28:45.731183] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:15:52.325 [2024-05-16 07:28:45.731199] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:15:52.325 passed 00:15:52.325 00:15:52.325 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.325 suites 1 1 n/a 0 0 00:15:52.325 tests 43 43 43 0 0 00:15:52.325 asserts 10418 10418 10418 0 n/a 00:15:52.325 00:15:52.325 Elapsed time = 0.047 seconds 00:15:52.325 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@89 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:15:52.325 00:15:52.325 00:15:52.325 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.325 http://cunit.sourceforge.net/ 00:15:52.325 00:15:52.325 00:15:52.325 Suite: nvme_ctrlr_cmd 00:15:52.325 Test: test_get_log_pages ...passed 00:15:52.325 Test: test_set_feature_cmd ...passed 00:15:52.325 Test: test_set_feature_ns_cmd ...passed 00:15:52.325 Test: test_get_feature_cmd ...passed 00:15:52.325 Test: test_get_feature_ns_cmd ...passed 00:15:52.325 Test: test_abort_cmd ...passed 00:15:52.325 Test: test_set_host_id_cmds ...passed 00:15:52.325 Test: test_io_cmd_raw_no_payload_build ...passed 00:15:52.325 Test: test_io_raw_cmd ...passed 00:15:52.325 Test: test_io_raw_cmd_with_md ...passed 00:15:52.325 Test: test_namespace_attach ...passed 00:15:52.325 Test: test_namespace_detach ...passed[2024-05-16 07:28:45.740960] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:15:52.325 00:15:52.325 Test: test_namespace_create ...passed 00:15:52.325 Test: test_namespace_delete ...passed 00:15:52.325 Test: test_doorbell_buffer_config ...passed 00:15:52.325 Test: test_format_nvme ...passed 00:15:52.325 Test: test_fw_commit ...passed 00:15:52.325 Test: test_fw_image_download ...passed 00:15:52.325 Test: test_sanitize ...passed 00:15:52.325 Test: test_directive ...passed 00:15:52.325 Test: test_nvme_request_add_abort ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:15:52.325 Test: test_nvme_ctrlr_cmd_identify ...passed 00:15:52.325 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:15:52.325 00:15:52.325 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.325 suites 1 1 n/a 0 0 00:15:52.325 tests 24 24 24 0 0 00:15:52.325 asserts 198 198 198 0 n/a 00:15:52.325 00:15:52.325 Elapsed time = 0.000 seconds 00:15:52.325 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:15:52.325 00:15:52.325 00:15:52.325 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.325 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme_ctrlr_cmd 00:15:52.326 Test: test_geometry_cmd ...passed 00:15:52.326 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 2 2 2 0 0 00:15:52.326 asserts 7 7 7 0 n/a 00:15:52.326 00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.326 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:15:52.326 00:15:52.326 00:15:52.326 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.326 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme 00:15:52.326 Test: test_nvme_ns_construct ...passed 00:15:52.326 Test: test_nvme_ns_uuid ...passed 00:15:52.326 Test: test_nvme_ns_csi ...passed 00:15:52.326 Test: test_nvme_ns_data ...passed 00:15:52.326 Test: test_nvme_ns_set_identify_data ...passed 00:15:52.326 Test: test_spdk_nvme_ns_get_values ...passed 00:15:52.326 Test: test_spdk_nvme_ns_is_active ...passed 00:15:52.326 Test: spdk_nvme_ns_supports ...passed 00:15:52.326 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:15:52.326 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 
00:15:52.326 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:15:52.326 Test: test_nvme_ns_find_id_desc ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 12 12 12 0 0 00:15:52.326 asserts 83 83 83 0 n/a 00:15:52.326 00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.326 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:15:52.326 00:15:52.326 00:15:52.326 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.326 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme_ns_cmd 00:15:52.326 Test: split_test ...passed 00:15:52.326 Test: split_test2 ...passed 00:15:52.326 Test: split_test3 ...passed 00:15:52.326 Test: split_test4 ...passed 00:15:52.326 Test: test_nvme_ns_cmd_flush ...passed 00:15:52.326 Test: test_nvme_ns_cmd_dataset_management ...passed 00:15:52.326 Test: test_nvme_ns_cmd_copy ...passed 00:15:52.326 Test: test_io_flags ...[2024-05-16 07:28:45.756728] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:15:52.326 passed 00:15:52.326 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:15:52.326 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:15:52.326 Test: test_nvme_ns_cmd_reservation_register ...passed 00:15:52.326 Test: test_nvme_ns_cmd_reservation_release ...passed 00:15:52.326 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:15:52.326 Test: test_nvme_ns_cmd_reservation_report ...passed 00:15:52.326 Test: test_cmd_child_request ...passed 00:15:52.326 Test: test_nvme_ns_cmd_readv ...passed 00:15:52.326 Test: test_nvme_ns_cmd_read_with_md ...passed 00:15:52.326 Test: test_nvme_ns_cmd_writev ...passed 00:15:52.326 Test: test_nvme_ns_cmd_write_with_md ...passed 00:15:52.326 Test: test_nvme_ns_cmd_zone_append_with_md ...[2024-05-16 07:28:45.756992] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:15:52.326 passed 00:15:52.326 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:15:52.326 Test: test_nvme_ns_cmd_comparev ...passed 00:15:52.326 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:15:52.326 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:15:52.326 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:15:52.326 Test: test_nvme_ns_cmd_setup_request ...passed 00:15:52.326 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:15:52.326 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:15:52.326 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-05-16 07:28:45.757120] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:15:52.326 passed 00:15:52.326 Test: test_nvme_ns_cmd_verify ...passed 00:15:52.326 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:15:52.326 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 32 32 32 0 0 00:15:52.326 asserts 550 550 550 0 n/a 00:15:52.326 00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.326 [2024-05-16 07:28:45.757138] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:15:52.326 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@93 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:15:52.326 00:15:52.326 00:15:52.326 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.326 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme_ns_cmd 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:15:52.326 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 12 12 12 0 0 00:15:52.326 asserts 123 123 123 0 n/a 00:15:52.326 00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.326 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:15:52.326 00:15:52.326 00:15:52.326 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.326 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme_qpair 00:15:52.326 Test: test3 ...passed 00:15:52.326 Test: test_ctrlr_failed ...passed 00:15:52.326 Test: struct_packing ...passed 00:15:52.326 Test: test_nvme_qpair_process_completions ...[2024-05-16 07:28:45.767976] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:52.326 [2024-05-16 07:28:45.768379] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:52.326 passed 00:15:52.326 Test: test_nvme_completion_is_retry ...passed 00:15:52.326 Test: test_get_status_string ...passed 00:15:52.326 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:15:52.326 Test: test_nvme_qpair_submit_request ...passed 00:15:52.326 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:15:52.326 Test: test_nvme_qpair_manual_complete_request ...passed 00:15:52.326 Test: test_nvme_qpair_init_deinit ...passed[2024-05-16 07:28:45.768479] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:15:52.326 [2024-05-16 07:28:45.768503] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:15:52.326 [2024-05-16 07:28:45.768570] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:52.326 00:15:52.326 Test: test_nvme_get_sgl_print_info ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 12 12 12 0 0 00:15:52.326 asserts 154 154 154 0 n/a 00:15:52.326 
00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.326 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:15:52.326 00:15:52.326 00:15:52.326 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.326 http://cunit.sourceforge.net/ 00:15:52.326 00:15:52.326 00:15:52.326 Suite: nvme_pcie 00:15:52.326 Test: test_prp_list_append ...passed 00:15:52.326 Test: test_nvme_pcie_hotplug_monitor ...passed 00:15:52.326 Test: test_shadow_doorbell_update ...passed 00:15:52.326 Test: test_build_contig_hw_sgl_request ...passed 00:15:52.326 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:15:52.326 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed[2024-05-16 07:28:45.773382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:15:52.326 [2024-05-16 07:28:45.773551] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:15:52.326 [2024-05-16 07:28:45.773565] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:15:52.326 [2024-05-16 07:28:45.773603] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:15:52.326 [2024-05-16 07:28:45.773622] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:15:52.326 00:15:52.326 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:15:52.326 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-05-16 07:28:45.773724] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:15:52.326 passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:15:52.326 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:15:52.326 00:15:52.326 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.326 suites 1 1 n/a 0 0 00:15:52.326 tests 14 14 14 0 0 00:15:52.326 asserts 235 235 235 0 n/a 00:15:52.326 00:15:52.326 Elapsed time = 0.000 seconds 00:15:52.327 [2024-05-16 07:28:45.773748] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:15:52.327 [2024-05-16 07:28:45.773762] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:15:52.327 [2024-05-16 07:28:45.773775] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:15:52.327 [2024-05-16 07:28:45.773787] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:15:52.327 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:15:52.327 00:15:52.327 00:15:52.327 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.327 http://cunit.sourceforge.net/ 00:15:52.327 00:15:52.327 00:15:52.327 Suite: nvme_ns_cmd 00:15:52.327 Test: nvme_poll_group_create_test ...passed 00:15:52.327 Test: nvme_poll_group_add_remove_test ...passed 00:15:52.327 Test: nvme_poll_group_process_completions ...passed 00:15:52.327 Test: nvme_poll_group_destroy_test ...passed 00:15:52.327 Test: nvme_poll_group_get_free_stats ...passed 00:15:52.327 00:15:52.327 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.327 suites 1 1 n/a 0 0 00:15:52.327 tests 5 5 5 0 0 00:15:52.327 asserts 75 75 75 0 n/a 00:15:52.327 00:15:52.327 Elapsed time = 0.000 seconds 00:15:52.327 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:15:52.327 00:15:52.327 00:15:52.327 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.327 http://cunit.sourceforge.net/ 00:15:52.327 00:15:52.327 00:15:52.327 Suite: nvme_quirks 00:15:52.327 Test: test_nvme_quirks_striping ...passed 00:15:52.327 00:15:52.327 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.327 suites 1 1 n/a 0 0 00:15:52.327 tests 1 1 1 0 0 00:15:52.327 asserts 5 5 5 0 n/a 00:15:52.327 00:15:52.327 Elapsed time = 0.000 seconds 00:15:52.327 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:15:52.327 00:15:52.327 00:15:52.327 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.327 http://cunit.sourceforge.net/ 00:15:52.327 00:15:52.327 00:15:52.327 Suite: nvme_tcp 00:15:52.327 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:15:52.327 Test: test_nvme_tcp_build_iovs ...passed 00:15:52.327 Test: test_nvme_tcp_build_sgl_request ...passed 00:15:52.327 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:15:52.327 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:15:52.327 Test: test_nvme_tcp_req_complete_safe ...passed 00:15:52.327 Test: test_nvme_tcp_req_get ...passed 00:15:52.327 Test: test_nvme_tcp_req_init ...passed 00:15:52.327 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:15:52.327 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:15:52.327 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:15:52.327 Test: test_nvme_tcp_alloc_reqs ...passed 00:15:52.327 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:15:52.327 Test: test_nvme_tcp_pdu_ch_handle ...[2024-05-16 07:28:45.788810] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8202a6a78, and the iovcnt=16, remaining_size=28672 00:15:52.327 [2024-05-16 07:28:45.789101] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8202a8608 is same with the state(6) to be set 00:15:52.327 [2024-05-16 07:28:45.789145] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789173] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8202a7d98 00:15:52.327 [2024-05-16 07:28:45.789186] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1227:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:15:52.327 [2024-05-16 07:28:45.789198] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789218] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:15:52.327 [2024-05-16 07:28:45.789233] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-16 07:28:45.789251] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:15:52.327 [2024-05-16 07:28:45.789263] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789280] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789295] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789314] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789328] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789344] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.789404] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:15:52.327 [2024-05-16 07:28:45.789420] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:15:52.327 [2024-05-16 07:28:45.792824] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:15:52.327 Test: test_nvme_tcp_c2h_payload_handle ...[2024-05-16 07:28:45.792885] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8202a81d0): PDU 
Sequence Error 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_icresp_handle ...passed 00:15:52.327 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:15:52.327 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:15:52.327 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:15:52.327 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-16 07:28:45.792910] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:15:52.327 [2024-05-16 07:28:45.792930] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1575:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:15:52.327 [2024-05-16 07:28:45.792942] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.792954] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:15:52.327 [2024-05-16 07:28:45.792972] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.792988] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a8608 is same with the state(0) to be set 00:15:52.327 [2024-05-16 07:28:45.793008] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8202a81d0): PDU Sequence Error 00:15:52.327 [2024-05-16 07:28:45.793028] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8202a8608 00:15:52.327 [2024-05-16 07:28:45.793077] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8202a6368, errno=0, rc=0 00:15:52.327 [2024-05-16 07:28:45.793094] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a6368 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.793106] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8202a6368 is same with the state(5) to be set 00:15:52.327 [2024-05-16 07:28:45.793163] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8202a6368 (0): No error: 0 00:15:52.327 [2024-05-16 07:28:45.793178] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8202a6368 (0): No error: 0 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-05-16 07:28:45.865635] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:15:52.327 Test: test_nvme_tcp_poll_group_get_stats ...[2024-05-16 07:28:45.865693] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_ctrlr_construct ...[2024-05-16 07:28:45.865744] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:52.327 [2024-05-16 07:28:45.865764] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:52.327 [2024-05-16 07:28:45.865828] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:15:52.327 [2024-05-16 07:28:45.865842] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:52.327 [2024-05-16 07:28:45.865859] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:15:52.327 passed 00:15:52.327 Test: test_nvme_tcp_qpair_submit_request ...passed 00:15:52.327 00:15:52.327 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.327 suites 1 1 n/a 0 0 00:15:52.327 tests 27 27 27 0 0 00:15:52.327 asserts 624 624 624 0 n/a 00:15:52.327 00:15:52.327 Elapsed time = 0.070 seconds 00:15:52.327 [2024-05-16 07:28:45.865872] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:52.327 [2024-05-16 07:28:45.865891] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2375:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82c26e000 with addr=192.168.1.78, port=23 00:15:52.327 [2024-05-16 07:28:45.865903] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:52.328 [2024-05-16 07:28:45.865927] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82c241180, and the iovcnt=1, remaining_size=1024 00:15:52.328 [2024-05-16 07:28:45.865939] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:15:52.328 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:15:52.328 00:15:52.328 00:15:52.328 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.328 http://cunit.sourceforge.net/ 00:15:52.328 00:15:52.328 00:15:52.328 Suite: nvme_transport 00:15:52.328 Test: test_nvme_get_transport ...passed 00:15:52.328 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:15:52.328 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:15:52.328 Test: test_nvme_transport_poll_group_add_remove ...passed 00:15:52.328 Test: test_ctrlr_get_memory_domains ...passed 00:15:52.328 00:15:52.328 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.328 suites 1 1 n/a 0 0 00:15:52.328 tests 5 5 5 0 0 00:15:52.328 asserts 28 28 28 0 n/a 00:15:52.328 00:15:52.328 Elapsed time = 0.000 seconds 00:15:52.328 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:15:52.328 00:15:52.328 00:15:52.328 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.328 http://cunit.sourceforge.net/ 00:15:52.328 00:15:52.328 00:15:52.328 Suite: nvme_io_msg 00:15:52.328 Test: test_nvme_io_msg_send ...passed 00:15:52.328 Test: test_nvme_io_msg_process ...passed 
00:15:52.328 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:15:52.328 00:15:52.328 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.328 suites 1 1 n/a 0 0 00:15:52.328 tests 3 3 3 0 0 00:15:52.328 asserts 56 56 56 0 n/a 00:15:52.328 00:15:52.328 Elapsed time = 0.000 seconds 00:15:52.328 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:15:52.328 00:15:52.328 00:15:52.328 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.328 http://cunit.sourceforge.net/ 00:15:52.328 00:15:52.328 00:15:52.328 Suite: nvme_pcie_common 00:15:52.328 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:15:52.328 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:15:52.328 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:15:52.328 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:15:52.328 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:15:52.328 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:15:52.328 00:15:52.328 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.328 suites 1 1 n/a 0 0 00:15:52.328 tests 6 6 6 0 0 00:15:52.328 asserts 148 148 148 0 n/a 00:15:52.328 00:15:52.328 Elapsed time = 0.000 seconds 00:15:52.328 [2024-05-16 07:28:45.889562] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:15:52.328 [2024-05-16 07:28:45.889802] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:15:52.328 [2024-05-16 07:28:45.889821] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 
00:15:52.328 [2024-05-16 07:28:45.889842] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:15:52.328 [2024-05-16 07:28:45.889956] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:52.328 [2024-05-16 07:28:45.889967] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:52.587 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:15:52.587 00:15:52.587 00:15:52.587 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.587 http://cunit.sourceforge.net/ 00:15:52.587 00:15:52.587 00:15:52.587 Suite: nvme_fabric 00:15:52.587 Test: test_nvme_fabric_prop_set_cmd ...passed 00:15:52.587 Test: test_nvme_fabric_prop_get_cmd ...passed 00:15:52.587 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:15:52.587 Test: test_nvme_fabric_discover_probe ...passed 00:15:52.587 Test: test_nvme_fabric_qpair_connect ...passed 00:15:52.587 00:15:52.587 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.587 suites 1 1 n/a 0 0 00:15:52.587 tests 5 5 5 0 0 00:15:52.587 asserts 60 60 60 0 n/a 00:15:52.587 00:15:52.587 Elapsed time = 0.000 seconds 00:15:52.587 [2024-05-16 07:28:45.895143] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:15:52.587 07:28:45 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:15:52.587 00:15:52.587 00:15:52.587 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.587 http://cunit.sourceforge.net/ 00:15:52.587 00:15:52.587 00:15:52.587 Suite: nvme_opal 00:15:52.587 Test: test_opal_nvme_security_recv_send_done ...passed 00:15:52.587 Test: test_opal_add_short_atom_header ...passed 00:15:52.587 00:15:52.587 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.587 suites 1 1 n/a 0 0 00:15:52.587 tests 2 2 2 0 0 00:15:52.587 asserts 22 22 22 0 n/a 00:15:52.587 00:15:52.587 Elapsed time = 0.000 seconds 00:15:52.587 ************************************ 00:15:52.587 END TEST unittest_nvme 00:15:52.587 ************************************ 00:15:52.587 [2024-05-16 07:28:45.898805] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 
00:15:52.587 00:15:52.587 real 0m0.387s 00:15:52.587 user 0m0.097s 00:15:52.587 sys 0m0.137s 00:15:52.587 07:28:45 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.587 07:28:45 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.587 07:28:45 unittest -- unit/unittest.sh@248 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:15:52.587 07:28:45 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:52.587 07:28:45 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.587 07:28:45 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:52.587 ************************************ 00:15:52.587 START TEST unittest_log 00:15:52.587 ************************************ 00:15:52.587 07:28:45 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:15:52.587 00:15:52.587 00:15:52.587 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.587 http://cunit.sourceforge.net/ 00:15:52.587 00:15:52.587 00:15:52.587 Suite: log 00:15:52.587 Test: log_test ...passed 00:15:52.587 Test: deprecation ...[2024-05-16 07:28:45.942106] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:15:52.587 [2024-05-16 07:28:45.942352] log_ut.c: 57:log_test: *DEBUG*: log test 00:15:52.587 log dump test: 00:15:52.587 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:15:52.587 spdk dump test: 00:15:52.587 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:15:52.587 spdk dump test: 00:15:52.587 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:15:52.587 00000010 65 20 63 68 61 72 73 e chars 00:15:53.549 passed 00:15:53.549 00:15:53.549 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.549 suites 1 1 n/a 0 0 00:15:53.549 tests 2 2 2 0 0 00:15:53.549 asserts 73 73 73 0 n/a 00:15:53.549 00:15:53.549 Elapsed time = 0.000 seconds 00:15:53.549 00:15:53.549 real 0m1.067s 00:15:53.549 user 0m0.006s 00:15:53.549 sys 0m0.005s 00:15:53.549 07:28:47 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.549 ************************************ 00:15:53.549 END TEST unittest_log 00:15:53.549 ************************************ 00:15:53.549 07:28:47 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:15:53.549 07:28:47 unittest -- unit/unittest.sh@249 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:15:53.549 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.549 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.549 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.549 ************************************ 00:15:53.549 START TEST unittest_lvol 00:15:53.549 ************************************ 00:15:53.549 07:28:47 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:15:53.549 00:15:53.549 00:15:53.549 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.549 http://cunit.sourceforge.net/ 00:15:53.549 00:15:53.549 00:15:53.549 Suite: lvol 00:15:53.549 Test: lvs_init_unload_success ...[2024-05-16 07:28:47.047683] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:15:53.549 passed 00:15:53.549 Test: lvs_init_destroy_success ...passed 00:15:53.549 Test: 
lvs_init_opts_success ...passed 00:15:53.549 Test: lvs_unload_lvs_is_null_fail ...passed 00:15:53.549 Test: lvs_names ...passed 00:15:53.549 Test: lvol_create_destroy_success ...passed 00:15:53.549 Test: lvol_create_fail ...passed 00:15:53.549 Test: lvol_destroy_fail ...passed 00:15:53.549 Test: lvol_close ...[2024-05-16 07:28:47.047900] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:15:53.549 [2024-05-16 07:28:47.047925] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:15:53.549 [2024-05-16 07:28:47.047940] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:15:53.549 [2024-05-16 07:28:47.047951] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:15:53.549 [2024-05-16 07:28:47.047969] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:15:53.549 [2024-05-16 07:28:47.048028] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:15:53.549 [2024-05-16 07:28:47.048042] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:15:53.549 [2024-05-16 07:28:47.048069] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:15:53.549 [2024-05-16 07:28:47.048089] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:15:53.549 passed 00:15:53.549 Test: lvol_resize ...[2024-05-16 07:28:47.048101] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:15:53.549 passed 00:15:53.549 Test: lvol_set_read_only ...passed 00:15:53.549 Test: test_lvs_load ...passed 00:15:53.549 Test: lvols_load ...passed 00:15:53.549 Test: lvol_open ...passed 00:15:53.549 Test: lvol_snapshot ...passed 00:15:53.549 Test: lvol_snapshot_fail ...passed 00:15:53.549 Test: lvol_clone ...passed 00:15:53.549 Test: lvol_clone_fail ...passed 00:15:53.549 Test: lvol_iter_clones ...passed 00:15:53.549 Test: lvol_refcnt ...passed 00:15:53.549 Test: lvol_names ...passed 00:15:53.549 Test: lvol_create_thin_provisioned ...passed 00:15:53.549 Test: lvol_rename ...passed 00:15:53.549 Test: lvs_rename ...passed 00:15:53.549 Test: lvol_inflate ...passed 00:15:53.549 Test: lvol_decouple_parent ...passed 00:15:53.549 Test: lvol_get_xattr ...passed 00:15:53.549 Test: lvol_esnap_reload ...passed 00:15:53.549 Test: lvol_esnap_create_bad_args ...[2024-05-16 07:28:47.048153] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:15:53.549 [2024-05-16 07:28:47.048163] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:15:53.550 [2024-05-16 07:28:47.048185] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:15:53.550 [2024-05-16 07:28:47.048209] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:15:53.550 [2024-05-16 07:28:47.048279] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:15:53.550 [2024-05-16 07:28:47.048327] 
/usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:15:53.550 [2024-05-16 07:28:47.048370] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol eec002a4-1355-11ef-8e8f-9dd684e56d79 because it is still open 00:15:53.550 [2024-05-16 07:28:47.048387] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:15:53.550 [2024-05-16 07:28:47.048400] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:53.550 [2024-05-16 07:28:47.048418] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:15:53.550 [2024-05-16 07:28:47.048454] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:53.550 [2024-05-16 07:28:47.048469] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:15:53.550 [2024-05-16 07:28:47.048493] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:15:53.550 [2024-05-16 07:28:47.048513] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:15:53.550 [2024-05-16 07:28:47.048533] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:15:53.550 [2024-05-16 07:28:47.048571] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:15:53.550 passed 00:15:53.550 Test: lvol_esnap_create_delete ...passed 00:15:53.550 Test: lvol_esnap_load_esnaps ...passed 00:15:53.550 Test: lvol_esnap_missing ...passed 00:15:53.550 Test: lvol_esnap_hotplug ... 00:15:53.550 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:15:53.550 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:15:53.550 [2024-05-16 07:28:47.048582] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:15:53.550 [2024-05-16 07:28:47.048617] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:15:53.550 [2024-05-16 07:28:47.048631] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:15:53.550 [2024-05-16 07:28:47.048654] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:15:53.550 [2024-05-16 07:28:47.048691] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:15:53.550 [2024-05-16 07:28:47.048712] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:15:53.550 [2024-05-16 07:28:47.048722] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:15:53.550 [2024-05-16 07:28:47.048787] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol eec01315-1355-11ef-8e8f-9dd684e56d79: failed to create esnap bs_dev: error -12 00:15:53.550 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:15:53.550 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:15:53.550 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:15:53.550 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:15:53.550 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:15:53.550 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:15:53.550 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:15:53.550 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:15:53.550 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:15:53.550 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:15:53.550 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:15:53.550 passed 00:15:53.550 Test: lvol_get_by ...passed 00:15:53.550 Test: lvol_shallow_copy ...passed 00:15:53.550 Test: lvol_set_parent ...passed 00:15:53.550 Test: lvol_set_external_parent ...passed 00:15:53.550 00:15:53.550 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.550 suites 1 1 n/a 0 0 00:15:53.550 tests 37 37 37 0 0 00:15:53.550 asserts 1505 1505 1505 0 n/a 00:15:53.550 00:15:53.550 Elapsed time = 0.000 seconds 00:15:53.550 [2024-05-16 07:28:47.048830] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol eec014a9-1355-11ef-8e8f-9dd684e56d79: failed to create esnap bs_dev: error -12 00:15:53.550 [2024-05-16 07:28:47.048855] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol eec015c0-1355-11ef-8e8f-9dd684e56d79: failed to create esnap bs_dev: error -12 00:15:53.550 [2024-05-16 07:28:47.049016] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049027] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol eec01c0f-1355-11ef-8e8f-9dd684e56d79 shallow 
copy, ext_dev must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049053] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049064] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049083] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049093] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:15:53.550 [2024-05-16 07:28:47.049104] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:15:53.550 00:15:53.550 real 0m0.009s 00:15:53.550 user 0m0.001s 00:15:53.550 sys 0m0.008s 00:15:53.550 07:28:47 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.550 ************************************ 00:15:53.550 END TEST unittest_lvol 00:15:53.550 ************************************ 00:15:53.550 07:28:47 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:53.550 07:28:47 unittest -- unit/unittest.sh@250 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:53.550 07:28:47 unittest -- unit/unittest.sh@251 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:15:53.550 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.550 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.550 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.550 ************************************ 00:15:53.550 START TEST unittest_nvme_rdma 00:15:53.550 ************************************ 00:15:53.550 07:28:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:15:53.550 00:15:53.550 00:15:53.550 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.550 http://cunit.sourceforge.net/ 00:15:53.550 00:15:53.550 00:15:53.550 Suite: nvme_rdma 00:15:53.550 Test: test_nvme_rdma_build_sgl_request ...passed 00:15:53.550 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:15:53.550 Test: test_nvme_rdma_build_contig_request ...passed 00:15:53.550 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:15:53.550 Test: test_nvme_rdma_create_reqs ...passed 00:15:53.550 Test: test_nvme_rdma_create_rsps ...passed 00:15:53.550 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-05-16 07:28:47.099883] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:15:53.550 [2024-05-16 07:28:47.100051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:15:53.550 [2024-05-16 07:28:47.100065] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1689:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:15:53.550 [2024-05-16 07:28:47.100100] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1570:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:15:53.550 [2024-05-16 07:28:47.100134] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:15:53.550 [2024-05-16 07:28:47.100174] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:15:53.550 passed 00:15:53.550 Test: test_nvme_rdma_poller_create ...passed 00:15:53.550 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-05-16 07:28:47.100193] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:15:53.550 [2024-05-16 07:28:47.100202] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:15:53.550 [2024-05-16 07:28:47.100223] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:15:53.550 passed 00:15:53.550 Test: test_nvme_rdma_ctrlr_construct ...passed 00:15:53.550 Test: test_nvme_rdma_req_put_and_get ...passed 00:15:53.550 Test: test_nvme_rdma_req_init ...passed 00:15:53.550 Test: test_nvme_rdma_validate_cm_event ...passed 00:15:53.550 Test: test_nvme_rdma_qpair_init ...passed 00:15:53.550 Test: test_nvme_rdma_qpair_submit_request ...passed 00:15:53.550 Test: test_nvme_rdma_memory_domain ...passed 00:15:53.550 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:15:53.550 Test: test_rdma_get_memory_translation ...passed 00:15:53.550 Test: test_get_rdma_qpair_from_wc ...passed 00:15:53.550 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:15:53.550 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:15:53.550 Test: test_nvme_rdma_qpair_set_poller ...[2024-05-16 07:28:47.100273] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:15:53.550 [2024-05-16 07:28:47.100283] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:15:53.550 [2024-05-16 07:28:47.100316] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:15:53.550 [2024-05-16 07:28:47.100332] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:15:53.550 [2024-05-16 07:28:47.100341] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:15:53.550 [2024-05-16 07:28:47.100368] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:53.550 [2024-05-16 07:28:47.100382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:15:53.550 passed 00:15:53.550 00:15:53.551 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.551 suites 1 1 n/a 0 0 00:15:53.551 tests 22 22 22 0 0 00:15:53.551 asserts 412 412 412 0 n/a 00:15:53.551 00:15:53.551 Elapsed time = 0.000 seconds 00:15:53.551 [2024-05-16 07:28:47.100417] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: 
*ERROR*: Unable to create CQ, errno 0. 00:15:53.551 [2024-05-16 07:28:47.100427] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:15:53.551 [2024-05-16 07:28:47.100436] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x82031cbc8 on poll group 0x82b0ec000 00:15:53.551 [2024-05-16 07:28:47.100445] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:15:53.551 [2024-05-16 07:28:47.100453] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:15:53.551 [2024-05-16 07:28:47.100461] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x82031cbc8 on poll group 0x82b0ec000 00:15:53.551 [2024-05-16 07:28:47.100500] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:15:53.551 00:15:53.551 real 0m0.008s 00:15:53.551 user 0m0.000s 00:15:53.551 sys 0m0.008s 00:15:53.551 07:28:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.551 ************************************ 00:15:53.551 END TEST unittest_nvme_rdma 00:15:53.551 ************************************ 00:15:53.551 07:28:47 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:53.812 07:28:47 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:15:53.812 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.812 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.812 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.812 ************************************ 00:15:53.812 START TEST unittest_nvmf_transport 00:15:53.812 ************************************ 00:15:53.812 07:28:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:15:53.812 00:15:53.812 00:15:53.812 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.812 http://cunit.sourceforge.net/ 00:15:53.812 00:15:53.812 00:15:53.812 Suite: nvmf 00:15:53.812 Test: test_spdk_nvmf_transport_create ...[2024-05-16 07:28:47.142041] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:15:53.812 [2024-05-16 07:28:47.142239] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:15:53.813 [2024-05-16 07:28:47.142255] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:15:53.813 passed 00:15:53.813 Test: test_nvmf_transport_poll_group_create ...[2024-05-16 07:28:47.142287] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:15:53.813 passed 00:15:53.813 Test: test_spdk_nvmf_transport_opts_init ...passed 00:15:53.813 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:15:53.813 00:15:53.813 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.813 suites 1 1 n/a 0 0 00:15:53.813 tests 4 4 4 0 0 00:15:53.813 asserts 49 49 49 0 n/a 00:15:53.813 00:15:53.813 Elapsed time = 0.000 seconds 00:15:53.813 [2024-05-16 07:28:47.142316] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:15:53.813 [2024-05-16 07:28:47.142329] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:15:53.813 [2024-05-16 07:28:47.142341] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:15:53.813 00:15:53.813 real 0m0.005s 00:15:53.813 user 0m0.004s 00:15:53.813 sys 0m0.000s 00:15:53.813 07:28:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.813 07:28:47 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:15:53.813 ************************************ 00:15:53.813 END TEST unittest_nvmf_transport 00:15:53.813 ************************************ 00:15:53.813 07:28:47 unittest -- unit/unittest.sh@253 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.813 ************************************ 00:15:53.813 START TEST unittest_rdma 00:15:53.813 ************************************ 00:15:53.813 07:28:47 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:15:53.813 00:15:53.813 00:15:53.813 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.813 http://cunit.sourceforge.net/ 00:15:53.813 00:15:53.813 00:15:53.813 Suite: rdma_common 00:15:53.813 Test: test_spdk_rdma_pd ...passed 00:15:53.813 00:15:53.813 [2024-05-16 07:28:47.181098] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:15:53.813 [2024-05-16 07:28:47.181281] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:15:53.813 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.813 suites 1 1 n/a 0 0 00:15:53.813 tests 1 1 1 0 0 00:15:53.813 asserts 31 31 31 0 n/a 00:15:53.813 00:15:53.813 Elapsed time = 0.000 seconds 00:15:53.813 00:15:53.813 real 0m0.005s 00:15:53.813 user 0m0.005s 00:15:53.813 sys 0m0.000s 00:15:53.813 07:28:47 
unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.813 ************************************ 00:15:53.813 END TEST unittest_rdma 00:15:53.813 ************************************ 00:15:53.813 07:28:47 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:53.813 07:28:47 unittest -- unit/unittest.sh@256 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:53.813 07:28:47 unittest -- unit/unittest.sh@260 -- # run_test unittest_nvmf unittest_nvmf 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.813 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.813 ************************************ 00:15:53.813 START TEST unittest_nvmf 00:15:53.813 ************************************ 00:15:53.813 07:28:47 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:15:53.813 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:15:53.813 00:15:53.813 00:15:53.813 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.813 http://cunit.sourceforge.net/ 00:15:53.813 00:15:53.813 00:15:53.813 Suite: nvmf 00:15:53.813 Test: test_get_log_page ...[2024-05-16 07:28:47.223175] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:15:53.813 passed 00:15:53.813 Test: test_process_fabrics_cmd ...passed 00:15:53.813 Test: test_connect ...[2024-05-16 07:28:47.223428] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:15:53.813 [2024-05-16 07:28:47.223507] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:15:53.813 [2024-05-16 07:28:47.223524] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:15:53.813 [2024-05-16 07:28:47.223536] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:15:53.813 [2024-05-16 07:28:47.223548] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:15:53.813 [2024-05-16 07:28:47.223559] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:15:53.813 [2024-05-16 07:28:47.223571] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 888:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:15:53.813 [2024-05-16 07:28:47.223588] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:15:53.813 [2024-05-16 07:28:47.223599] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:15:53.813 [2024-05-16 07:28:47.223614] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:15:53.813 [2024-05-16 07:28:47.223627] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:15:53.813 [2024-05-16 07:28:47.223648] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:15:53.813 passed 00:15:53.813 Test: test_get_ns_id_desc_list ...passed 00:15:53.813 Test: test_identify_ns ...passed 00:15:53.813 Test: test_identify_ns_iocs_specific ...[2024-05-16 07:28:47.223661] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:15:53.813 [2024-05-16 07:28:47.223679] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 690:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:15:53.813 [2024-05-16 07:28:47.223693] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 714:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:15:53.813 [2024-05-16 07:28:47.223708] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:15:53.813 [2024-05-16 07:28:47.223725] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:15:53.813 [2024-05-16 07:28:47.223737] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:15:53.813 [2024-05-16 07:28:47.223786] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:53.813 [2024-05-16 07:28:47.223843] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:15:53.813 [2024-05-16 07:28:47.223868] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:53.813 [2024-05-16 07:28:47.223897] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:53.813 passed 00:15:53.813 Test: test_reservation_write_exclusive ...passed 00:15:53.813 Test: test_reservation_exclusive_access ...passed 00:15:53.813 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:15:53.813 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:15:53.813 Test: test_reservation_notification_log_page ...passed 00:15:53.813 Test: test_get_dif_ctx ...passed 00:15:53.813 Test: test_set_get_features ...passed 00:15:53.813 Test: test_identify_ctrlr ...passed 00:15:53.813 Test: test_identify_ctrlr_iocs_specific ...[2024-05-16 07:28:47.223953] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:53.813 [2024-05-16 07:28:47.224047] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:15:53.813 [2024-05-16 07:28:47.224058] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:15:53.813 [2024-05-16 07:28:47.224068] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid 
THSEL 3 00:15:53.813 [2024-05-16 07:28:47.224079] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:15:53.813 passed 00:15:53.813 Test: test_custom_admin_cmd ...passed 00:15:53.813 Test: test_fused_compare_and_write ...passed 00:15:53.813 Test: test_multi_async_event_reqs ...passed 00:15:53.813 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:15:53.813 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:15:53.813 Test: test_multi_async_events ...passed 00:15:53.813 Test: test_rae ...passed 00:15:53.813 Test: test_nvmf_ctrlr_create_destruct ...passed 00:15:53.813 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:15:53.813 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:15:53.813 Test: test_zcopy_read ...passed 00:15:53.813 Test: test_zcopy_write ...passed 00:15:53.813 Test: test_nvmf_property_set ...passed 00:15:53.813 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-05-16 07:28:47.224167] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:15:53.813 [2024-05-16 07:28:47.224184] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:15:53.813 [2024-05-16 07:28:47.224195] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:15:53.813 [2024-05-16 07:28:47.224264] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:15:53.813 [2024-05-16 07:28:47.224277] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:15:53.814 [2024-05-16 07:28:47.224313] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:15:53.814 passed 00:15:53.814 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:15:53.814 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:15:53.814 Test: test_nvmf_check_qpair_active ...passed 00:15:53.814 00:15:53.814 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.814 suites 1 1 n/a 0 0 00:15:53.814 tests 32 32 32 0 0 00:15:53.814 asserts 977 977 977 0 n/a 00:15:53.814 00:15:53.814 Elapsed time = 0.000 seconds[2024-05-16 07:28:47.224323] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:15:53.814 [2024-05-16 07:28:47.224336] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:15:53.814 [2024-05-16 07:28:47.224347] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:15:53.814 [2024-05-16 07:28:47.224357] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:15:53.814 [2024-05-16 07:28:47.224381] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4678:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:15:53.814 [2024-05-16 07:28:47.224393] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4692:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:15:53.814 [2024-05-16 07:28:47.224403] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:15:53.814 [2024-05-16 07:28:47.224413] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:15:53.814 [2024-05-16 07:28:47.224423] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4704:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:15:53.814 00:15:53.814 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:15:53.814 00:15:53.814 00:15:53.814 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.814 http://cunit.sourceforge.net/ 00:15:53.814 00:15:53.814 00:15:53.814 Suite: nvmf 00:15:53.814 Test: test_get_rw_params ...passed 00:15:53.814 Test: test_get_rw_ext_params ...passed 00:15:53.814 Test: test_lba_in_range ...passed 00:15:53.814 Test: test_get_dif_ctx ...passed 00:15:53.814 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:15:53.814 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-05-16 07:28:47.231626] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:15:53.814 passed 00:15:53.814 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:15:53.814 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:15:53.814 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:15:53.814 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:15:53.814 00:15:53.814 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.814 suites 1 1 n/a 0 0 00:15:53.814 tests 10 10 10 0 0 00:15:53.814 asserts 159 159 159 0 n/a 00:15:53.814 00:15:53.814 Elapsed time = 0.000 seconds 00:15:53.814 [2024-05-16 07:28:47.231809] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:15:53.814 [2024-05-16 07:28:47.231824] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:15:53.814 [2024-05-16 07:28:47.231839] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:15:53.814 [2024-05-16 07:28:47.231850] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:15:53.814 [2024-05-16 07:28:47.231864] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:15:53.814 [2024-05-16 07:28:47.231875] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:15:53.814 [2024-05-16 07:28:47.231887] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:15:53.814 [2024-05-16 07:28:47.231898] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:15:53.814 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:15:53.814 00:15:53.814 00:15:53.814 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.814 http://cunit.sourceforge.net/ 00:15:53.814 00:15:53.814 00:15:53.814 Suite: nvmf 00:15:53.814 Test: test_discovery_log ...passed 00:15:53.814 Test: test_discovery_log_with_filters ...passed 00:15:53.814 00:15:53.814 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.814 suites 1 1 n/a 0 0 00:15:53.814 tests 2 2 2 0 0 00:15:53.814 asserts 238 238 238 0 n/a 00:15:53.814 00:15:53.814 Elapsed time = 0.000 seconds 00:15:53.814 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:15:53.814 00:15:53.814 00:15:53.814 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.814 http://cunit.sourceforge.net/ 00:15:53.814 00:15:53.814 00:15:53.814 Suite: nvmf 00:15:53.814 Test: nvmf_test_create_subsystem ...[2024-05-16 07:28:47.242498] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:15:53.814 [2024-05-16 07:28:47.242704] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:15:53.814 [2024-05-16 07:28:47.242732] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:15:53.814 [2024-05-16 07:28:47.242748] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:15:53.814 [2024-05-16 07:28:47.242765] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:15:53.814 [2024-05-16 07:28:47.242779] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:15:53.814 [2024-05-16 07:28:47.242794] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:15:53.814 [2024-05-16 07:28:47.242808] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:15:53.814 [2024-05-16 07:28:47.242823] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:15:53.814 [2024-05-16 07:28:47.242838] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:15:53.814 [2024-05-16 07:28:47.242852] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:15:53.814 [2024-05-16 07:28:47.242866] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:15:53.814 [2024-05-16 07:28:47.242889] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:15:53.814 [2024-05-16 07:28:47.242904] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:15:53.814 [2024-05-16 07:28:47.242937] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:15:53.814 [2024-05-16 07:28:47.242951] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:15:53.814 [2024-05-16 07:28:47.242969] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:15:53.814 [2024-05-16 07:28:47.242983] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:15:53.814 [2024-05-16 07:28:47.243015] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:15:53.814 [2024-05-16 07:28:47.243030] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:15:53.814 [2024-05-16 07:28:47.243045] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:15:53.814 [2024-05-16 07:28:47.243059] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:15:53.814 passed 00:15:53.814 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:15:53.814 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:15:53.814 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:15:53.814 Test: test_spdk_nvmf_ns_visible ...passed 00:15:53.814 Test: test_reservation_register ...passed 00:15:53.814 Test: test_reservation_register_with_ptpl ...passed 00:15:53.814 Test: test_reservation_acquire_preempt_1 ...passed 00:15:53.814 Test: test_reservation_acquire_release_with_ptpl ...passed 00:15:53.814 Test: test_reservation_release ...passed 00:15:53.814 Test: test_reservation_unregister_notification ...passed 00:15:53.814 Test: test_reservation_release_notification 
...[2024-05-16 07:28:47.243127] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:15:53.814 [2024-05-16 07:28:47.243143] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:15:53.814 [2024-05-16 07:28:47.243176] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2139:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:15:53.814 [2024-05-16 07:28:47.243208] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:15:53.815 [2024-05-16 07:28:47.243300] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243327] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3135:nvmf_ns_reservation_register: *ERROR*: No registrant 00:15:53.815 [2024-05-16 07:28:47.243528] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243708] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243738] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 passed 00:15:53.815 Test: test_reservation_release_notification_write_exclusive ...passed 00:15:53.815 Test: test_reservation_clear_notification ...passed 00:15:53.815 Test: test_reservation_preempt_notification ...passed 00:15:53.815 Test: test_spdk_nvmf_ns_event ...passed 00:15:53.815 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:15:53.815 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:15:53.815 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:15:53.815 Test: test_nvmf_ns_reservation_report ...passed 00:15:53.815 Test: test_nvmf_nqn_is_valid ...passed 00:15:53.815 Test: test_nvmf_ns_reservation_restore ...passed 00:15:53.815 Test: test_nvmf_subsystem_state_change ...passed 00:15:53.815 Test: test_nvmf_reservation_custom_ops ...[2024-05-16 07:28:47.243763] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243787] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243810] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243833] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3079:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:15:53.815 [2024-05-16 07:28:47.243928] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:15:53.815 [2024-05-16 07:28:47.243953] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:15:53.815 [2024-05-16 07:28:47.243976] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3441:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:15:53.815 [2024-05-16 07:28:47.244008] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:15:53.815 [2024-05-16 07:28:47.244024] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:eedddcad-1355-11ef-8e8f-9dd684e56d7": uuid is not the correct length 00:15:53.815 [2024-05-16 07:28:47.244039] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:15:53.815 [2024-05-16 07:28:47.244079] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2634:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:15:53.815 passed 00:15:53.815 00:15:53.815 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.815 suites 1 1 n/a 0 0 00:15:53.815 tests 24 24 24 0 0 00:15:53.815 asserts 499 499 499 0 n/a 00:15:53.815 00:15:53.815 Elapsed time = 0.000 seconds 00:15:53.815 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:15:53.815 00:15:53.815 00:15:53.815 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.815 http://cunit.sourceforge.net/ 00:15:53.815 00:15:53.815 00:15:53.815 Suite: nvmf 00:15:53.815 Test: test_nvmf_tcp_create ...[2024-05-16 07:28:47.255554] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:15:53.815 passed 00:15:53.815 Test: test_nvmf_tcp_destroy ...passed 00:15:53.815 Test: test_nvmf_tcp_poll_group_create ...passed 00:15:53.815 Test: test_nvmf_tcp_send_c2h_data ...passed 00:15:53.815 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:15:53.815 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:15:53.815 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:15:53.815 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-05-16 07:28:47.268909] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.268951] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.268970] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.268986] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 passed 00:15:53.815 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:15:53.815 Test: test_nvmf_tcp_icreq_handle ...[2024-05-16 07:28:47.269001] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269050] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:15:53.815 [2024-05-16 07:28:47.269067] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269081] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19c00 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269096] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:15:53.815 passed 00:15:53.815 Test: test_nvmf_tcp_check_xfer_type ...passed 00:15:53.815 Test: test_nvmf_tcp_invalid_sgl ...passed 00:15:53.815 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-16 07:28:47.269110] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19c00 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269125] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269139] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19c00 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269157] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269172] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19c00 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269207] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2509:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:15:53.815 [2024-05-16 07:28:47.269224] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269238] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19c00 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269256] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820c19488 00:15:53.815 [2024-05-16 07:28:47.269270] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269284] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269299] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2299:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820c19cf8 00:15:53.815 [2024-05-16 07:28:47.269316] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269330] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269344] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:15:53.815 [2024-05-16 07:28:47.269358] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269373] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269390] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:15:53.815 [2024-05-16 07:28:47.269406] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 passed 00:15:53.815 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-16 07:28:47.269420] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269435] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269449] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269464] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269478] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269492] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269507] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269534] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269550] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269565] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269579] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 [2024-05-16 07:28:47.269594] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:15:53.815 [2024-05-16 07:28:47.269608] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1599:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c19cf8 is same with the state(5) to be set 00:15:53.815 passed 00:15:53.815 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-05-16 07:28:47.277370] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 
00:15:53.815 passed 00:15:53.816 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-05-16 07:28:47.277410] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:15:53.816 [2024-05-16 07:28:47.277654] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:15:53.816 passed 00:15:53.816 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:15:53.816 00:15:53.816 [2024-05-16 07:28:47.277677] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:15:53.816 [2024-05-16 07:28:47.277816] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:15:53.816 [2024-05-16 07:28:47.277836] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:15:53.816 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.816 suites 1 1 n/a 0 0 00:15:53.816 tests 17 17 17 0 0 00:15:53.816 asserts 222 222 222 0 n/a 00:15:53.816 00:15:53.816 Elapsed time = 0.023 seconds 00:15:53.816 07:28:47 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:15:53.816 00:15:53.816 00:15:53.816 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.816 http://cunit.sourceforge.net/ 00:15:53.816 00:15:53.816 00:15:53.816 Suite: nvmf 00:15:53.816 Test: test_nvmf_tgt_create_poll_group ...passed 00:15:53.816 00:15:53.816 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.816 suites 1 1 n/a 0 0 00:15:53.816 tests 1 1 1 0 0 00:15:53.816 asserts 17 17 17 0 n/a 00:15:53.816 00:15:53.816 Elapsed time = 0.008 seconds 00:15:53.816 00:15:53.816 real 0m0.072s 00:15:53.816 user 0m0.029s 00:15:53.816 sys 0m0.044s 00:15:53.816 07:28:47 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.816 07:28:47 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:15:53.816 ************************************ 00:15:53.816 END TEST unittest_nvmf 00:15:53.816 ************************************ 00:15:53.816 07:28:47 unittest -- unit/unittest.sh@261 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:53.816 07:28:47 unittest -- unit/unittest.sh@266 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:53.816 07:28:47 unittest -- unit/unittest.sh@267 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.816 ************************************ 00:15:53.816 START TEST unittest_nvmf_rdma 00:15:53.816 ************************************ 00:15:53.816 07:28:47 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:15:53.816 00:15:53.816 00:15:53.816 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.816 http://cunit.sourceforge.net/ 00:15:53.816 00:15:53.816 00:15:53.816 Suite: nvmf 00:15:53.816 Test: 
test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-16 07:28:47.335425] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1861:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:15:53.816 [2024-05-16 07:28:47.335757] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:15:53.816 [2024-05-16 07:28:47.335795] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1911:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:15:53.816 passed 00:15:53.816 Test: test_spdk_nvmf_rdma_request_process ...passed 00:15:53.816 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:15:53.816 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:15:53.816 Test: test_nvmf_rdma_opts_init ...passed 00:15:53.816 Test: test_nvmf_rdma_request_free_data ...passed 00:15:53.816 Test: test_nvmf_rdma_resources_create ...passed 00:15:53.816 Test: test_nvmf_rdma_qpair_compare ...passed 00:15:53.816 Test: test_nvmf_rdma_resize_cq ...[2024-05-16 07:28:47.336982] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 950:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:15:53.816 Using CQ of insufficient size may lead to CQ overrun 00:15:53.816 [2024-05-16 07:28:47.337010] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:15:53.816 [2024-05-16 07:28:47.337075] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:15:53.816 passed 00:15:53.816 00:15:53.816 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.816 suites 1 1 n/a 0 0 00:15:53.816 tests 9 9 9 0 0 00:15:53.816 asserts 579 579 579 0 n/a 00:15:53.816 00:15:53.816 Elapsed time = 0.000 seconds 00:15:53.816 00:15:53.816 real 0m0.009s 00:15:53.816 user 0m0.009s 00:15:53.816 sys 0m0.006s 00:15:53.816 07:28:47 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.816 07:28:47 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:53.816 ************************************ 00:15:53.816 END TEST unittest_nvmf_rdma 00:15:53.816 ************************************ 00:15:53.816 07:28:47 unittest -- unit/unittest.sh@270 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:53.816 07:28:47 unittest -- unit/unittest.sh@274 -- # run_test unittest_scsi unittest_scsi 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.816 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:53.816 ************************************ 00:15:53.816 START TEST unittest_scsi 00:15:53.816 ************************************ 00:15:53.816 07:28:47 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:15:53.816 07:28:47 unittest.unittest_scsi -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:15:54.079 00:15:54.079 00:15:54.079 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.079 http://cunit.sourceforge.net/ 00:15:54.079 00:15:54.079 00:15:54.079 Suite: dev_suite 00:15:54.079 Test: dev_destruct_null_dev ...passed 00:15:54.079 Test: dev_destruct_zero_luns 
...passed 00:15:54.079 Test: dev_destruct_null_lun ...passed 00:15:54.079 Test: dev_destruct_success ...passed 00:15:54.079 Test: dev_construct_num_luns_zero ...[2024-05-16 07:28:47.379748] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:15:54.079 passed 00:15:54.079 Test: dev_construct_no_lun_zero ...[2024-05-16 07:28:47.379996] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:15:54.079 passed 00:15:54.079 Test: dev_construct_null_lun ...passed 00:15:54.079 Test: dev_construct_name_too_long ...passed 00:15:54.079 Test: dev_construct_success ...passed 00:15:54.079 Test: dev_construct_success_lun_zero_not_first ...passed 00:15:54.079 Test: dev_queue_mgmt_task_success ...passed 00:15:54.079 Test: dev_queue_task_success ...passed 00:15:54.079 Test: dev_stop_success ...passed 00:15:54.079 Test: dev_add_port_max_ports ...passed 00:15:54.079 Test: dev_add_port_construct_failure1 ...passed 00:15:54.079 Test: dev_add_port_construct_failure2 ...[2024-05-16 07:28:47.380025] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:15:54.079 [2024-05-16 07:28:47.380056] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:15:54.079 [2024-05-16 07:28:47.380114] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:15:54.079 [2024-05-16 07:28:47.380137] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:15:54.079 [2024-05-16 07:28:47.380158] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:15:54.079 passed 00:15:54.079 Test: dev_add_port_success1 ...passed 00:15:54.079 Test: dev_add_port_success2 ...passed 00:15:54.079 Test: dev_add_port_success3 ...passed 00:15:54.079 Test: dev_find_port_by_id_num_ports_zero ...passed 00:15:54.079 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:15:54.079 Test: dev_find_port_by_id_success ...passed 00:15:54.079 Test: dev_add_lun_bdev_not_found ...passed 00:15:54.079 Test: dev_add_lun_no_free_lun_id ...passed 00:15:54.079 Test: dev_add_lun_success1 ...passed 00:15:54.079 Test: dev_add_lun_success2 ...passed 00:15:54.079 Test: dev_check_pending_tasks ...passed 00:15:54.079 Test: dev_iterate_luns ...passed 00:15:54.079 Test: dev_find_free_lun ...[2024-05-16 07:28:47.380472] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:15:54.079 passed 00:15:54.079 00:15:54.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.079 suites 1 1 n/a 0 0 00:15:54.079 tests 29 29 29 0 0 00:15:54.079 asserts 97 97 97 0 n/a 00:15:54.079 00:15:54.079 Elapsed time = 0.000 seconds 00:15:54.079 07:28:47 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:15:54.079 00:15:54.079 00:15:54.079 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.079 http://cunit.sourceforge.net/ 00:15:54.079 
00:15:54.079 00:15:54.079 Suite: lun_suite 00:15:54.079 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-05-16 07:28:47.386912] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:15:54.079 passed 00:15:54.079 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:15:54.079 Test: lun_task_mgmt_execute_lun_reset ...passed 00:15:54.079 Test: lun_task_mgmt_execute_target_reset ...passed 00:15:54.079 Test: lun_task_mgmt_execute_invalid_case ...passed 00:15:54.079 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:15:54.079 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:15:54.079 Test: lun_append_task_null_lun_not_supported ...passed 00:15:54.079 Test: lun_execute_scsi_task_pending ...passed 00:15:54.079 Test: lun_execute_scsi_task_complete ...passed 00:15:54.079 Test: lun_execute_scsi_task_resize ...[2024-05-16 07:28:47.387117] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:15:54.079 [2024-05-16 07:28:47.387141] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:15:54.079 passed 00:15:54.079 Test: lun_destruct_success ...passed 00:15:54.079 Test: lun_construct_null_ctx ...[2024-05-16 07:28:47.387181] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:15:54.079 passed 00:15:54.079 Test: lun_construct_success ...passed 00:15:54.079 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:15:54.079 Test: lun_reset_task_suspend_scsi_task ...passed 00:15:54.079 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:15:54.079 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:15:54.079 00:15:54.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.079 suites 1 1 n/a 0 0 00:15:54.079 tests 18 18 18 0 0 00:15:54.079 asserts 153 153 153 0 n/a 00:15:54.079 00:15:54.079 Elapsed time = 0.000 seconds 00:15:54.079 07:28:47 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:15:54.079 00:15:54.079 00:15:54.079 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.079 http://cunit.sourceforge.net/ 00:15:54.079 00:15:54.079 00:15:54.079 Suite: scsi_suite 00:15:54.079 Test: scsi_init ...passed 00:15:54.079 00:15:54.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.079 suites 1 1 n/a 0 0 00:15:54.079 tests 1 1 1 0 0 00:15:54.079 asserts 1 1 1 0 n/a 00:15:54.079 00:15:54.079 Elapsed time = 0.000 seconds 00:15:54.079 07:28:47 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:15:54.079 00:15:54.079 00:15:54.079 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.079 http://cunit.sourceforge.net/ 00:15:54.079 00:15:54.079 00:15:54.079 Suite: translation_suite 00:15:54.079 Test: mode_select_6_test ...passed 00:15:54.079 Test: mode_select_6_test2 ...passed 00:15:54.079 Test: mode_sense_6_test ...passed 00:15:54.079 Test: mode_sense_10_test ...passed 00:15:54.079 Test: inquiry_evpd_test ...passed 00:15:54.079 Test: inquiry_standard_test ...passed 00:15:54.079 Test: inquiry_overflow_test ...passed 00:15:54.079 Test: task_complete_test ...passed 00:15:54.079 Test: lba_range_test ...passed 00:15:54.079 Test: xfer_len_test ...[2024-05-16 07:28:47.400242] 
/usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:15:54.079 passed 00:15:54.079 Test: xfer_test ...passed 00:15:54.079 Test: scsi_name_padding_test ...passed 00:15:54.079 Test: get_dif_ctx_test ...passed 00:15:54.079 Test: unmap_split_test ...passed 00:15:54.079 00:15:54.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.079 suites 1 1 n/a 0 0 00:15:54.079 tests 14 14 14 0 0 00:15:54.079 asserts 1205 1205 1205 0 n/a 00:15:54.079 00:15:54.079 Elapsed time = 0.000 seconds 00:15:54.079 07:28:47 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:15:54.079 00:15:54.079 00:15:54.079 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.079 http://cunit.sourceforge.net/ 00:15:54.079 00:15:54.079 00:15:54.079 Suite: reservation_suite 00:15:54.079 Test: test_reservation_register ...[2024-05-16 07:28:47.405809] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.079 passed 00:15:54.079 Test: test_reservation_reserve ...[2024-05-16 07:28:47.405954] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.079 passed 00:15:54.079 Test: test_reservation_preempt_non_all_regs ...passed 00:15:54.079 Test: test_reservation_preempt_all_regs ...passed 00:15:54.079 Test: test_reservation_cmds_conflict ...[2024-05-16 07:28:47.405967] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:15:54.080 [2024-05-16 07:28:47.405982] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:15:54.080 [2024-05-16 07:28:47.405995] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.080 [2024-05-16 07:28:47.406004] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:15:54.080 [2024-05-16 07:28:47.406021] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.080 [2024-05-16 07:28:47.406034] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.080 [2024-05-16 07:28:47.406043] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:15:54.080 passed 00:15:54.080 Test: test_scsi2_reserve_release ...passed 00:15:54.080 Test: test_pr_with_scsi2_reserve_release ...passed 00:15:54.080 00:15:54.080 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.080 suites 1 1 n/a 0 0 00:15:54.080 tests 7 7 7 0 0 00:15:54.080 asserts 257 257 257 0 n/a 00:15:54.080 00:15:54.080 Elapsed time = 0.000 seconds 00:15:54.080 [2024-05-16 07:28:47.406051] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:15:54.080 [2024-05-16 07:28:47.406059] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:15:54.080 [2024-05-16 
07:28:47.406066] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:15:54.080 [2024-05-16 07:28:47.406074] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:15:54.080 [2024-05-16 07:28:47.406090] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:15:54.080 00:15:54.080 real 0m0.032s 00:15:54.080 user 0m0.008s 00:15:54.080 sys 0m0.025s 00:15:54.080 07:28:47 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.080 07:28:47 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 ************************************ 00:15:54.080 END TEST unittest_scsi 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest -- unit/unittest.sh@277 -- # uname -s 00:15:54.080 07:28:47 unittest -- unit/unittest.sh@277 -- # '[' FreeBSD = Linux ']' 00:15:54.080 07:28:47 unittest -- unit/unittest.sh@280 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 ************************************ 00:15:54.080 START TEST unittest_thread 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:15:54.080 00:15:54.080 00:15:54.080 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.080 http://cunit.sourceforge.net/ 00:15:54.080 00:15:54.080 00:15:54.080 Suite: io_channel 00:15:54.080 Test: thread_alloc ...passed 00:15:54.080 Test: thread_send_msg ...passed 00:15:54.080 Test: thread_poller ...passed 00:15:54.080 Test: poller_pause ...passed 00:15:54.080 Test: thread_for_each ...passed 00:15:54.080 Test: for_each_channel_remove ...passed 00:15:54.080 Test: for_each_channel_unreg ...[2024-05-16 07:28:47.449980] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x820d74e94 already registered (old:0x82bca3000 new:0x82bca3180) 00:15:54.080 passed 00:15:54.080 Test: thread_name ...passed 00:15:54.080 Test: channel ...passed 00:15:54.080 Test: channel_destroy_races ...[2024-05-16 07:28:47.450488] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x2276c8 00:15:54.080 passed 00:15:54.080 Test: thread_exit_test ...passed 00:15:54.080 Test: thread_update_stats_test ...[2024-05-16 07:28:47.450952] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 636:thread_exit: *ERROR*: thread 0x82bc68a80 got timeout, and move it to the exited state forcefully 00:15:54.080 passed 00:15:54.080 Test: nested_channel ...passed 00:15:54.080 Test: device_unregister_and_thread_exit_race ...passed 00:15:54.080 Test: cache_closest_timed_poller ...passed 00:15:54.080 Test: multi_timed_pollers_have_same_expiration ...passed 00:15:54.080 Test: io_device_lookup ...passed 00:15:54.080 Test: spdk_spin ...[2024-05-16 07:28:47.451969] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: 
unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:15:54.080 [2024-05-16 07:28:47.451983] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820d74e90 00:15:54.080 [2024-05-16 07:28:47.451994] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:15:54.080 [2024-05-16 07:28:47.452130] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:15:54.080 [2024-05-16 07:28:47.452140] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820d74e90 00:15:54.080 [2024-05-16 07:28:47.452151] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:15:54.080 [2024-05-16 07:28:47.452161] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820d74e90 00:15:54.080 [2024-05-16 07:28:47.452171] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:15:54.080 [2024-05-16 07:28:47.452180] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820d74e90 00:15:54.080 [2024-05-16 07:28:47.452191] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:15:54.080 [2024-05-16 07:28:47.452200] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820d74e90 00:15:54.080 passed 00:15:54.080 Test: for_each_channel_and_thread_exit_race ...passed 00:15:54.080 Test: for_each_thread_and_thread_exit_race ...passed 00:15:54.080 00:15:54.080 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.080 suites 1 1 n/a 0 0 00:15:54.080 tests 20 20 20 0 0 00:15:54.080 asserts 409 409 409 0 n/a 00:15:54.080 00:15:54.080 Elapsed time = 0.008 seconds 00:15:54.080 00:15:54.080 real 0m0.010s 00:15:54.080 user 0m0.010s 00:15:54.080 sys 0m0.000s 00:15:54.080 07:28:47 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.080 ************************************ 00:15:54.080 END TEST unittest_thread 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 07:28:47 unittest -- unit/unittest.sh@281 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 ************************************ 00:15:54.080 START TEST unittest_iobuf 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:15:54.080 00:15:54.080 00:15:54.080 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.080 http://cunit.sourceforge.net/ 00:15:54.080 00:15:54.080 00:15:54.080 
Suite: io_channel 00:15:54.080 Test: iobuf ...passed 00:15:54.080 Test: iobuf_cache ...[2024-05-16 07:28:47.491954] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:15:54.080 [2024-05-16 07:28:47.492157] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:15:54.080 [2024-05-16 07:28:47.492208] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:15:54.080 [2024-05-16 07:28:47.492225] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:15:54.080 passed 00:15:54.080 00:15:54.080 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.080 suites 1 1 n/a 0 0 00:15:54.080 tests 2 2 2 0 0 00:15:54.080 asserts 107 107 107 0 n/a 00:15:54.080 00:15:54.080 Elapsed time = 0.000 seconds 00:15:54.080 [2024-05-16 07:28:47.492243] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:15:54.080 [2024-05-16 07:28:47.492258] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:15:54.080 00:15:54.080 real 0m0.005s 00:15:54.080 user 0m0.004s 00:15:54.080 sys 0m0.004s 00:15:54.080 07:28:47 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.080 ************************************ 00:15:54.080 END TEST unittest_iobuf 00:15:54.080 07:28:47 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest -- unit/unittest.sh@282 -- # run_test unittest_util unittest_util 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.080 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.080 ************************************ 00:15:54.080 START TEST unittest_util 00:15:54.080 ************************************ 00:15:54.080 07:28:47 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:15:54.080 07:28:47 unittest.unittest_util -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:15:54.080 00:15:54.080 00:15:54.080 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.080 http://cunit.sourceforge.net/ 00:15:54.080 00:15:54.081 00:15:54.081 Suite: base64 00:15:54.081 Test: test_base64_get_encoded_strlen ...passed 00:15:54.081 Test: test_base64_get_decoded_len ...passed 00:15:54.081 Test: test_base64_encode ...passed 00:15:54.081 Test: test_base64_decode ...passed 00:15:54.081 Test: test_base64_urlsafe_encode ...passed 00:15:54.081 Test: test_base64_urlsafe_decode ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 6 6 6 0 0 00:15:54.081 
asserts 112 112 112 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: bit_array 00:15:54.081 Test: test_1bit ...passed 00:15:54.081 Test: test_64bit ...passed 00:15:54.081 Test: test_find ...passed 00:15:54.081 Test: test_resize ...passed 00:15:54.081 Test: test_errors ...passed 00:15:54.081 Test: test_count ...passed 00:15:54.081 Test: test_mask_store_load ...passed 00:15:54.081 Test: test_mask_clear ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 8 8 8 0 0 00:15:54.081 asserts 5075 5075 5075 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: cpuset 00:15:54.081 Test: test_cpuset ...passed 00:15:54.081 Test: test_cpuset_parse ...passed 00:15:54.081 Test: test_cpuset_fmt ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 3 3 3 0 0 00:15:54.081 asserts 65 65 65 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 [2024-05-16 07:28:47.538598] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:15:54.081 [2024-05-16 07:28:47.538830] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:15:54.081 [2024-05-16 07:28:47.538843] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:15:54.081 [2024-05-16 07:28:47.538852] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:15:54.081 [2024-05-16 07:28:47.538859] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:15:54.081 [2024-05-16 07:28:47.538867] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:15:54.081 [2024-05-16 07:28:47.538875] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:15:54.081 [2024-05-16 07:28:47.538882] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: crc16 00:15:54.081 Test: test_crc16_t10dif ...passed 00:15:54.081 Test: test_crc16_t10dif_seed ...passed 00:15:54.081 Test: test_crc16_t10dif_copy ...passed 00:15:54.081 00:15:54.081 Run Summary: 
Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 3 3 3 0 0 00:15:54.081 asserts 5 5 5 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: crc32_ieee 00:15:54.081 Test: test_crc32_ieee ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 1 1 1 0 0 00:15:54.081 asserts 1 1 1 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: crc32c 00:15:54.081 Test: test_crc32c ...passed 00:15:54.081 Test: test_crc32c_nvme ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 2 2 2 0 0 00:15:54.081 asserts 16 16 16 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: crc64 00:15:54.081 Test: test_crc64_nvme ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 1 1 1 0 0 00:15:54.081 asserts 4 4 4 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: string 00:15:54.081 Test: test_parse_ip_addr ...passed 00:15:54.081 Test: test_str_chomp ...passed 00:15:54.081 Test: test_parse_capacity ...passed 00:15:54.081 Test: test_sprintf_append_realloc ...passed 00:15:54.081 Test: test_strtol ...passed 00:15:54.081 Test: test_strtoll ...passed 00:15:54.081 Test: test_strarray ...passed 00:15:54.081 Test: test_strcpy_replace ...passed 00:15:54.081 00:15:54.081 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.081 suites 1 1 n/a 0 0 00:15:54.081 tests 8 8 8 0 0 00:15:54.081 asserts 161 161 161 0 n/a 00:15:54.081 00:15:54.081 Elapsed time = 0.000 seconds 00:15:54.081 07:28:47 unittest.unittest_util -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:15:54.081 00:15:54.081 00:15:54.081 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.081 http://cunit.sourceforge.net/ 00:15:54.081 00:15:54.081 00:15:54.081 Suite: dif 00:15:54.081 Test: dif_generate_and_verify_test ...[2024-05-16 07:28:47.571015] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 
00:15:54.081 [2024-05-16 07:28:47.571357] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:15:54.081 [2024-05-16 07:28:47.571417] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:15:54.081 [2024-05-16 07:28:47.571471] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:54.081 passed 00:15:54.081 Test: dif_disable_check_test ...[2024-05-16 07:28:47.571530] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:54.081 [2024-05-16 07:28:47.571570] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:15:54.081 [2024-05-16 07:28:47.571695] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:54.081 [2024-05-16 07:28:47.571733] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:54.081 [2024-05-16 07:28:47.571770] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:15:54.081 passed 00:15:54.081 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-16 07:28:47.571894] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:15:54.081 [2024-05-16 07:28:47.571932] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:15:54.081 [2024-05-16 07:28:47.571970] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:15:54.081 [2024-05-16 07:28:47.572008] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:15:54.082 [2024-05-16 07:28:47.572045] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572083] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572119] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572156] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:15:54.082 passed 00:15:54.082 Test: dif_apptag_mask_test ...[2024-05-16 07:28:47.572193] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572230] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572267] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare 
Ref Tag: LBA=12, Expected=c, Actual=0 00:15:54.082 [2024-05-16 07:28:47.572306] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:15:54.082 [2024-05-16 07:28:47.572344] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:15:54.082 passed 00:15:54.082 Test: dif_sec_512_md_0_error_test ...passed 00:15:54.082 Test: dif_sec_4096_md_0_error_test ...passed 00:15:54.082 Test: dif_sec_4100_md_128_error_test ...passed 00:15:54.082 Test: dif_guard_seed_test ...passed 00:15:54.082 Test: dif_guard_value_test ...passed 00:15:54.082 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...[2024-05-16 07:28:47.572376] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:54.082 [2024-05-16 07:28:47.572389] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:54.082 [2024-05-16 07:28:47.572398] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:54.082 [2024-05-16 07:28:47.572408] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:15:54.082 [2024-05-16 07:28:47.572417] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:15:54.082 passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:15:54.082 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:54.082 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-16 07:28:47.577647] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd5c, Actual=fd4c 00:15:54.082 [2024-05-16 07:28:47.577950] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe31, Actual=fe21 00:15:54.082 [2024-05-16 07:28:47.578250] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.578549] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.578849] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.082 [2024-05-16 07:28:47.579150] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.082 [2024-05-16 07:28:47.579457] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=de67 00:15:54.082 [2024-05-16 07:28:47.579741] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=5084 00:15:54.082 [2024-05-16 07:28:47.580024] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753fd, Actual=1ab753ed 00:15:54.082 [2024-05-16 07:28:47.580414] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574670, Actual=38574660 00:15:54.082 [2024-05-16 07:28:47.580786] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.581183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.581548] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.082 [2024-05-16 07:28:47.581875] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.082 [2024-05-16 07:28:47.582200] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=fa09cddc 00:15:54.082 [2024-05-16 07:28:47.582564] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=48919b 00:15:54.082 [2024-05-16 07:28:47.582951] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.082 [2024-05-16 07:28:47.583380] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.082 [2024-05-16 07:28:47.583772] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.584136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.584468] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.082 [2024-05-16 07:28:47.584835] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.082 [2024-05-16 
07:28:47.585165] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.082 [2024-05-16 07:28:47.585520] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.082 passed 00:15:54.082 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-16 07:28:47.585771] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:15:54.082 [2024-05-16 07:28:47.585830] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:15:54.082 [2024-05-16 07:28:47.585895] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.585955] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.586016] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.082 [2024-05-16 07:28:47.586081] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.082 [2024-05-16 07:28:47.586150] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=de67 00:15:54.082 [2024-05-16 07:28:47.586192] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5084 00:15:54.082 [2024-05-16 07:28:47.586249] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:15:54.082 [2024-05-16 07:28:47.586318] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:15:54.082 [2024-05-16 07:28:47.586384] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.586444] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.586510] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.082 [2024-05-16 07:28:47.586570] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.082 [2024-05-16 07:28:47.586637] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fa09cddc 00:15:54.082 [2024-05-16 07:28:47.586677] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=48919b 00:15:54.082 [2024-05-16 07:28:47.586720] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, 
Actual=a576a7728ecc20d3 00:15:54.082 [2024-05-16 07:28:47.586790] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.082 [2024-05-16 07:28:47.586853] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.586913] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.082 [2024-05-16 07:28:47.586975] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.082 passed 00:15:54.082 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-16 07:28:47.587043] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.082 [2024-05-16 07:28:47.587111] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.082 [2024-05-16 07:28:47.587154] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.082 [2024-05-16 07:28:47.587206] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:15:54.083 [2024-05-16 07:28:47.587288] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:15:54.083 [2024-05-16 07:28:47.587349] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.587418] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.587479] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.587543] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.587608] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=de67 00:15:54.083 [2024-05-16 07:28:47.587661] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5084 00:15:54.083 [2024-05-16 07:28:47.587706] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:15:54.083 [2024-05-16 07:28:47.587768] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:15:54.083 [2024-05-16 07:28:47.587832] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.587900] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.587963] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.083 [2024-05-16 07:28:47.588031] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.083 [2024-05-16 07:28:47.588091] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fa09cddc 00:15:54.083 [2024-05-16 07:28:47.588136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=48919b 00:15:54.083 [2024-05-16 07:28:47.588183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.083 [2024-05-16 07:28:47.588250] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.083 [2024-05-16 07:28:47.588317] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.588381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.588448] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.588517] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.588584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.083 [2024-05-16 07:28:47.588631] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.083 passed 00:15:54.083 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-16 07:28:47.588677] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:15:54.083 [2024-05-16 07:28:47.588743] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:15:54.083 [2024-05-16 07:28:47.588800] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.588868] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.588944] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.589013] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.589069] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=de67 00:15:54.083 [2024-05-16 07:28:47.589115] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5084 00:15:54.083 [2024-05-16 07:28:47.589163] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:15:54.083 [2024-05-16 07:28:47.589225] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:15:54.083 [2024-05-16 07:28:47.589293] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.589359] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.589426] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.083 [2024-05-16 07:28:47.589492] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.083 [2024-05-16 07:28:47.589560] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fa09cddc 00:15:54.083 [2024-05-16 07:28:47.589607] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=48919b 00:15:54.083 [2024-05-16 07:28:47.589653] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.083 [2024-05-16 07:28:47.589723] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.083 [2024-05-16 07:28:47.589790] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.589857] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.589919] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.589989] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.590056] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.083 passed 00:15:54.083 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-16 07:28:47.590102] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.083 [2024-05-16 07:28:47.590153] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:15:54.083 [2024-05-16 07:28:47.590222] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:15:54.083 [2024-05-16 07:28:47.590286] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.590353] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.083 [2024-05-16 07:28:47.590415] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.083 [2024-05-16 07:28:47.590483] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 [2024-05-16 07:28:47.590544] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=de67 00:15:54.084 passed 00:15:54.084 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-16 07:28:47.590593] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5084 00:15:54.084 [2024-05-16 07:28:47.590642] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:15:54.084 [2024-05-16 07:28:47.590703] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:15:54.084 [2024-05-16 07:28:47.590766] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.590843] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.590904] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.084 [2024-05-16 07:28:47.590972] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.084 [2024-05-16 07:28:47.591036] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fa09cddc 00:15:54.084 [2024-05-16 07:28:47.591083] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=48919b 00:15:54.084 [2024-05-16 07:28:47.591129] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.084 [2024-05-16 07:28:47.591190] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.084 [2024-05-16 07:28:47.591264] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.591328] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.591389] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 [2024-05-16 07:28:47.591479] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 passed 00:15:54.084 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-16 07:28:47.591540] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.084 [2024-05-16 07:28:47.591582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.084 [2024-05-16 07:28:47.591635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:15:54.084 [2024-05-16 07:28:47.591703] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:15:54.084 [2024-05-16 07:28:47.591775] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.591843] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.591910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 [2024-05-16 07:28:47.591979] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 [2024-05-16 07:28:47.592050] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=de67 00:15:54.084 passed 00:15:54.084 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-16 07:28:47.592095] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=5084 00:15:54.084 [2024-05-16 07:28:47.592140] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:15:54.084 [2024-05-16 07:28:47.592207] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:15:54.084 [2024-05-16 07:28:47.592276] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.592338] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.592400] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.084 [2024-05-16 07:28:47.592457] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:15:54.084 [2024-05-16 07:28:47.592524] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=88, Expected=1ab753ed, Actual=fa09cddc 00:15:54.084 [2024-05-16 07:28:47.592564] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=48919b 00:15:54.084 [2024-05-16 07:28:47.592613] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.084 [2024-05-16 07:28:47.592678] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:15:54.084 [2024-05-16 07:28:47.592738] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.592803] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.592865] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 [2024-05-16 07:28:47.592932] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:15:54.084 passed 00:15:54.084 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-05-16 07:28:47.592999] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.084 [2024-05-16 07:28:47.593043] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fe72ee2969dac644 00:15:54.084 passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:15:54.084 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:15:54.084 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:54.084 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-16 07:28:47.598676] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd5c, Actual=fd4c 00:15:54.084 [2024-05-16 07:28:47.598893] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7774, Actual=7764 00:15:54.084 [2024-05-16 07:28:47.599101] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.599313] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.599514] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.084 [2024-05-16 07:28:47.599717] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.084 [2024-05-16 07:28:47.599913] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=de67 00:15:54.084 [2024-05-16 07:28:47.600115] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=f5b2 00:15:54.084 [2024-05-16 07:28:47.600317] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753fd, Actual=1ab753ed 00:15:54.084 [2024-05-16 07:28:47.600513] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=dfbd9f98, Actual=dfbd9f88 00:15:54.084 [2024-05-16 07:28:47.600707] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.600928] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.601141] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.084 [2024-05-16 07:28:47.601353] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.084 [2024-05-16 07:28:47.601564] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=fa09cddc 00:15:54.084 [2024-05-16 07:28:47.601770] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=3d124fc4 00:15:54.084 [2024-05-16 07:28:47.601978] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.084 [2024-05-16 07:28:47.602188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3eddc8c396e7c4f, Actual=3eddc9c396e7c4f 00:15:54.084 [2024-05-16 07:28:47.602394] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.602600] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.084 [2024-05-16 07:28:47.602812] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.084 [2024-05-16 07:28:47.603013] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.084 [2024-05-16 07:28:47.603227] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.084 [2024-05-16 07:28:47.603439] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4abae682b37db4b0 00:15:54.084 passed 00:15:54.084 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-16 
07:28:47.603522] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd5c, Actual=fd4c 00:15:54.085 [2024-05-16 07:28:47.603588] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bea3, Actual=beb3 00:15:54.085 [2024-05-16 07:28:47.603650] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.603713] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.603777] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 [2024-05-16 07:28:47.603839] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 [2024-05-16 07:28:47.603900] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=de67 00:15:54.085 [2024-05-16 07:28:47.603964] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3c65 00:15:54.085 [2024-05-16 07:28:47.604026] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753fd, Actual=1ab753ed 00:15:54.085 [2024-05-16 07:28:47.604087] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=410d57ca, Actual=410d57da 00:15:54.085 [2024-05-16 07:28:47.604149] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.604209] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.604276] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:15:54.085 [2024-05-16 07:28:47.604338] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:15:54.085 [2024-05-16 07:28:47.604399] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=fa09cddc 00:15:54.085 [2024-05-16 07:28:47.604459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=a3a28796 00:15:54.085 [2024-05-16 07:28:47.604524] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.085 [2024-05-16 07:28:47.604591] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=770c3e61e2ba99b2, Actual=770c3e71e2ba99b2 00:15:54.085 [2024-05-16 07:28:47.604655] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.604726] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.604792] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 [2024-05-16 07:28:47.604854] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 passed 00:15:54.085 Test: dix_sec_512_md_0_error ...[2024-05-16 07:28:47.604912] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.085 [2024-05-16 07:28:47.604976] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3e5b046f68a9514d 00:15:54.085 [2024-05-16 07:28:47.605006] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:15:54.085 passed 00:15:54.085 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:15:54.085 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:15:54.085 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:15:54.085 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:15:54.085 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:15:54.085 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:15:54.085 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:15:54.085 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:15:54.085 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:15:54.085 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-16 07:28:47.610439] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd5c, Actual=fd4c 00:15:54.085 [2024-05-16 07:28:47.610647] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=7774, Actual=7764 00:15:54.085 [2024-05-16 07:28:47.610867] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.611068] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.611289] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.085 [2024-05-16 07:28:47.611502] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.085 [2024-05-16 07:28:47.611711] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=de67 00:15:54.085 [2024-05-16 07:28:47.611913] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=f5b2 00:15:54.085 [2024-05-16 07:28:47.612120] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753fd, Actual=1ab753ed 00:15:54.085 [2024-05-16 07:28:47.612326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=97, Expected=dfbd9f98, Actual=dfbd9f88 00:15:54.085 [2024-05-16 07:28:47.612533] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.612746] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.612957] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.085 [2024-05-16 07:28:47.613162] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=1000000061 00:15:54.085 [2024-05-16 07:28:47.613368] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=fa09cddc 00:15:54.085 [2024-05-16 07:28:47.613577] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=3d124fc4 00:15:54.085 [2024-05-16 07:28:47.613787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.085 [2024-05-16 07:28:47.613993] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3eddc8c396e7c4f, Actual=3eddc9c396e7c4f 00:15:54.085 [2024-05-16 07:28:47.614202] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.614410] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.614619] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.085 [2024-05-16 07:28:47.614823] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=71 00:15:54.085 [2024-05-16 07:28:47.615035] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.085 [2024-05-16 07:28:47.615254] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4abae682b37db4b0 00:15:54.085 passed 00:15:54.085 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-16 07:28:47.615335] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd5c, Actual=fd4c 00:15:54.085 [2024-05-16 07:28:47.615398] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bea3, Actual=beb3 00:15:54.085 [2024-05-16 07:28:47.615462] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.615523] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.615583] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 [2024-05-16 07:28:47.615643] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.085 [2024-05-16 07:28:47.615706] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=de67 00:15:54.085 [2024-05-16 07:28:47.615768] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=3c65 00:15:54.085 [2024-05-16 07:28:47.615837] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753fd, Actual=1ab753ed 00:15:54.085 [2024-05-16 07:28:47.615897] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=410d57ca, Actual=410d57da 00:15:54.085 [2024-05-16 07:28:47.615958] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.616019] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.085 [2024-05-16 07:28:47.616085] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:15:54.085 [2024-05-16 07:28:47.616148] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=1000000059 00:15:54.085 [2024-05-16 07:28:47.616202] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=fa09cddc 00:15:54.085 [2024-05-16 07:28:47.616264] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=a3a28796 00:15:54.085 [2024-05-16 07:28:47.616329] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:15:54.085 [2024-05-16 07:28:47.616390] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=770c3e61e2ba99b2, Actual=770c3e71e2ba99b2 00:15:54.085 [2024-05-16 07:28:47.616449] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.086 [2024-05-16 07:28:47.616512] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=98 00:15:54.086 [2024-05-16 07:28:47.616572] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.086 [2024-05-16 07:28:47.616635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=49 00:15:54.086 [2024-05-16 07:28:47.616700] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=31bd9e965aae113 00:15:54.086 passed 00:15:54.086 Test: set_md_interleave_iovs_test ...[2024-05-16 07:28:47.616757] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3e5b046f68a9514d 00:15:54.086 passed 00:15:54.086 Test: set_md_interleave_iovs_split_test ...passed 00:15:54.086 Test: dif_generate_stream_pi_16_test ...passed 00:15:54.086 Test: dif_generate_stream_test ...passed 00:15:54.086 Test: set_md_interleave_iovs_alignment_test ...[2024-05-16 07:28:47.617853] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:15:54.086 passed 00:15:54.086 Test: dif_generate_split_test ...passed 00:15:54.086 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:15:54.086 Test: dif_verify_split_test ...passed 00:15:54.086 Test: dif_verify_stream_multi_segments_test ...passed 00:15:54.086 Test: update_crc32c_pi_16_test ...passed 00:15:54.086 Test: update_crc32c_test ...passed 00:15:54.086 Test: dif_update_crc32c_split_test ...passed 00:15:54.086 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:15:54.086 Test: get_range_with_md_test ...passed 00:15:54.086 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:15:54.086 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:15:54.086 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:15:54.086 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:15:54.086 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:15:54.086 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:15:54.086 Test: dif_generate_and_verify_unmap_test ...passed 00:15:54.086 00:15:54.086 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.086 suites 1 1 n/a 0 0 00:15:54.086 tests 79 79 79 0 0 00:15:54.086 asserts 3584 3584 3584 0 n/a 00:15:54.086 00:15:54.086 Elapsed time = 0.055 seconds 00:15:54.086 07:28:47 unittest.unittest_util -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:15:54.086 00:15:54.086 00:15:54.086 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.086 http://cunit.sourceforge.net/ 00:15:54.086 00:15:54.086 00:15:54.086 Suite: iov 00:15:54.086 Test: test_single_iov ...passed 00:15:54.086 Test: test_simple_iov ...passed 00:15:54.086 Test: test_complex_iov ...passed 00:15:54.086 Test: test_iovs_to_buf ...passed 00:15:54.086 Test: test_buf_to_iovs ...passed 00:15:54.086 Test: test_memset ...passed 00:15:54.086 Test: test_iov_one ...passed 00:15:54.086 Test: test_iov_xfer ...passed 00:15:54.086 00:15:54.086 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.086 suites 1 1 n/a 0 0 00:15:54.086 tests 8 8 8 0 0 00:15:54.086 asserts 156 156 156 0 n/a 00:15:54.086 00:15:54.086 Elapsed time = 0.000 seconds 00:15:54.086 07:28:47 unittest.unittest_util -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:15:54.086 00:15:54.086 00:15:54.086 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.086 http://cunit.sourceforge.net/ 00:15:54.086 00:15:54.086 00:15:54.086 Suite: math 00:15:54.086 Test: test_serial_number_arithmetic ...passed 00:15:54.086 Suite: erase 00:15:54.086 Test: test_memset_s ...passed 00:15:54.086 00:15:54.086 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.086 suites 2 2 n/a 0 0 00:15:54.086 tests 2 2 2 0 0 00:15:54.086 asserts 18 18 18 0 n/a 00:15:54.086 00:15:54.086 Elapsed time = 0.000 seconds 00:15:54.086 07:28:47 unittest.unittest_util -- 
unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:15:54.086 00:15:54.086 00:15:54.086 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.086 http://cunit.sourceforge.net/ 00:15:54.086 00:15:54.086 00:15:54.086 Suite: pipe 00:15:54.086 Test: test_create_destroy ...passed 00:15:54.086 Test: test_write_get_buffer ...passed 00:15:54.086 Test: test_write_advance ...passed 00:15:54.086 Test: test_read_get_buffer ...passed 00:15:54.086 Test: test_read_advance ...passed 00:15:54.086 Test: test_data ...passed 00:15:54.086 00:15:54.086 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.086 suites 1 1 n/a 0 0 00:15:54.086 tests 6 6 6 0 0 00:15:54.086 asserts 251 251 251 0 n/a 00:15:54.086 00:15:54.086 Elapsed time = 0.000 seconds 00:15:54.345 07:28:47 unittest.unittest_util -- unit/unittest.sh@145 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:15:54.345 00:15:54.345 00:15:54.345 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.345 http://cunit.sourceforge.net/ 00:15:54.345 00:15:54.345 00:15:54.345 Suite: xor 00:15:54.345 Test: test_xor_gen ...passed 00:15:54.345 00:15:54.345 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.345 suites 1 1 n/a 0 0 00:15:54.345 tests 1 1 1 0 0 00:15:54.345 asserts 17 17 17 0 n/a 00:15:54.345 00:15:54.345 Elapsed time = 0.000 seconds 00:15:54.345 00:15:54.345 real 0m0.124s 00:15:54.345 user 0m0.080s 00:15:54.345 sys 0m0.048s 00:15:54.345 07:28:47 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.345 ************************************ 00:15:54.345 END TEST unittest_util 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@283 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@286 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 START TEST unittest_dma 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:15:54.345 00:15:54.345 00:15:54.345 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.345 http://cunit.sourceforge.net/ 00:15:54.345 00:15:54.345 00:15:54.345 Suite: dma_suite 00:15:54.345 Test: test_dma ...passed 00:15:54.345 00:15:54.345 [2024-05-16 07:28:47.690594] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:15:54.345 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.345 suites 1 1 n/a 0 0 00:15:54.345 tests 1 1 1 0 0 00:15:54.345 asserts 54 54 54 0 n/a 00:15:54.345 00:15:54.345 Elapsed time = 0.000 seconds 00:15:54.345 00:15:54.345 real 0m0.005s 00:15:54.345 user 0m0.005s 00:15:54.345 sys 0m0.004s 00:15:54.345 07:28:47 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.345 07:28:47 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 
************************************ 00:15:54.345 END TEST unittest_dma 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@288 -- # run_test unittest_init unittest_init 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 START TEST unittest_init 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:15:54.345 07:28:47 unittest.unittest_init -- unit/unittest.sh@149 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:15:54.345 00:15:54.345 00:15:54.345 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.345 http://cunit.sourceforge.net/ 00:15:54.345 00:15:54.345 00:15:54.345 Suite: subsystem_suite 00:15:54.345 Test: subsystem_sort_test_depends_on_single ...passed 00:15:54.345 Test: subsystem_sort_test_depends_on_multiple ...passed 00:15:54.345 Test: subsystem_sort_test_missing_dependency ...[2024-05-16 07:28:47.731696] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:15:54.345 passed 00:15:54.345 00:15:54.345 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.345 suites 1 1 n/a 0 0 00:15:54.345 tests 3 3 3 0 0 00:15:54.345 asserts 20 20 20 0 n/a 00:15:54.345 00:15:54.345 Elapsed time = 0.000 seconds 00:15:54.345 [2024-05-16 07:28:47.731890] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:15:54.345 00:15:54.345 real 0m0.005s 00:15:54.345 user 0m0.005s 00:15:54.345 sys 0m0.000s 00:15:54.345 07:28:47 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.345 07:28:47 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 END TEST unittest_init 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@289 -- # run_test unittest_keyring /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.345 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 START TEST unittest_keyring 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:15:54.345 00:15:54.345 00:15:54.345 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.345 http://cunit.sourceforge.net/ 00:15:54.345 00:15:54.345 00:15:54.345 Suite: keyring 00:15:54.345 Test: test_keyring_add_remove ...[2024-05-16 07:28:47.769369] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:15:54.345 passed 00:15:54.345 Test: test_keyring_get_put ...passed 00:15:54.345 00:15:54.345 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.345 suites 1 1 n/a 0 0 00:15:54.345 tests 2 2 2 0 0 00:15:54.345 asserts 44 44 44 
0 n/a 00:15:54.345 00:15:54.345 Elapsed time = 0.000 seconds 00:15:54.345 [2024-05-16 07:28:47.769554] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:15:54.345 [2024-05-16 07:28:47.769569] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:54.345 00:15:54.345 real 0m0.005s 00:15:54.345 user 0m0.000s 00:15:54.345 sys 0m0.008s 00:15:54.345 07:28:47 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.345 07:28:47 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:15:54.345 ************************************ 00:15:54.345 END TEST unittest_keyring 00:15:54.345 ************************************ 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@291 -- # '[' no = yes ']' 00:15:54.345 00:15:54.345 00:15:54.345 ===================== 00:15:54.345 All unit tests passed 00:15:54.345 ===================== 00:15:54.345 07:28:47 unittest -- unit/unittest.sh@304 -- # set +x 00:15:54.345 WARN: lcov not installed or SPDK built without coverage! 00:15:54.345 WARN: neither valgrind nor ASAN is enabled! 00:15:54.345 00:15:54.345 00:15:54.345 00:15:54.345 real 0m13.525s 00:15:54.345 user 0m11.125s 00:15:54.346 sys 0m1.234s 00:15:54.346 07:28:47 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.346 ************************************ 00:15:54.346 END TEST unittest 00:15:54.346 07:28:47 unittest -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 ************************************ 00:15:54.346 07:28:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:15:54.346 07:28:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:54.346 07:28:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:54.346 07:28:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:15:54.346 07:28:47 -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:54.346 07:28:47 -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 07:28:47 -- spdk/autotest.sh@164 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:54.346 07:28:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.346 07:28:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.346 07:28:47 -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 ************************************ 00:15:54.346 START TEST env 00:15:54.346 ************************************ 00:15:54.346 07:28:47 env -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:54.912 * Looking for test storage... 
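Note on the dif suite output above: the *ERROR* lines from lib/util/dif.c are the expected product of its negative-path cases, not failures. Each block's protection information carries a guard checksum over the data, an application tag, and a reference tag tied to the LBA; the tests corrupt one field at a time and check that _dif_verify/_dif_reftag_check reject it, which is why every case still reports "passed" and the run summary counts 0 failed. The varying Expected/Actual widths reflect the different guard sizes the suite covers. To rerun just that suite outside the harness, the invocation mirrors the iov_ut/math_ut/pipe_ut/xor_ut commands nearby; the dif_ut path below is assumed by analogy with those binaries rather than taken from this log:
  /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut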
00:15:54.912 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:15:54.912 07:28:48 env -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:54.912 07:28:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.912 07:28:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.912 07:28:48 env -- common/autotest_common.sh@10 -- # set +x 00:15:54.912 ************************************ 00:15:54.912 START TEST env_memory 00:15:54.912 ************************************ 00:15:54.912 07:28:48 env.env_memory -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:54.912 00:15:54.912 00:15:54.912 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.912 http://cunit.sourceforge.net/ 00:15:54.912 00:15:54.912 00:15:54.912 Suite: memory 00:15:54.912 Test: alloc and free memory map ...[2024-05-16 07:28:48.315006] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:54.912 passed 00:15:54.912 Test: mem map translation ...[2024-05-16 07:28:48.322127] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:54.912 [2024-05-16 07:28:48.322162] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:54.912 [2024-05-16 07:28:48.322178] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:54.912 [2024-05-16 07:28:48.322187] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:54.912 passed 00:15:54.912 Test: mem map registration ...[2024-05-16 07:28:48.330760] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:15:54.912 [2024-05-16 07:28:48.330785] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:15:54.912 passed 00:15:54.912 Test: mem map adjacent registrations ...passed 00:15:54.912 00:15:54.912 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.912 suites 1 1 n/a 0 0 00:15:54.912 tests 4 4 4 0 0 00:15:54.912 asserts 152 152 152 0 n/a 00:15:54.912 00:15:54.912 Elapsed time = 0.039 seconds 00:15:54.912 00:15:54.912 real 0m0.042s 00:15:54.912 user 0m0.024s 00:15:54.912 sys 0m0.018s 00:15:54.912 07:28:48 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.912 07:28:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:15:54.912 ************************************ 00:15:54.912 END TEST env_memory 00:15:54.912 ************************************ 00:15:54.912 07:28:48 env -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:54.912 07:28:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:54.912 07:28:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.912 07:28:48 env -- common/autotest_common.sh@10 -- # set +x 00:15:54.912 ************************************ 00:15:54.912 START TEST env_vtophys 00:15:54.912 
************************************ 00:15:54.912 07:28:48 env.env_vtophys -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:54.912 EAL: lib.eal log level changed from notice to debug 00:15:54.912 EAL: Sysctl reports 10 cpus 00:15:54.912 EAL: Detected lcore 0 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 1 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 2 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 3 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 4 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 5 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 6 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 7 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 8 as core 0 on socket 0 00:15:54.912 EAL: Detected lcore 9 as core 0 on socket 0 00:15:54.912 EAL: Maximum logical cores by configuration: 128 00:15:54.912 EAL: Detected CPU lcores: 10 00:15:54.912 EAL: Detected NUMA nodes: 1 00:15:54.912 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:15:54.912 EAL: Checking presence of .so 'librte_eal.so.24' 00:15:54.912 EAL: Checking presence of .so 'librte_eal.so' 00:15:54.912 EAL: Detected static linkage of DPDK 00:15:54.912 EAL: No shared files mode enabled, IPC will be disabled 00:15:54.912 EAL: PCI scan found 10 devices 00:15:54.912 EAL: Specific IOVA mode is not requested, autodetecting 00:15:54.912 EAL: Selecting IOVA mode according to bus requests 00:15:54.912 EAL: Bus pci wants IOVA as 'PA' 00:15:54.912 EAL: Selected IOVA mode 'PA' 00:15:54.912 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:15:54.912 EAL: Ask a virtual area of 0x2e000 bytes 00:15:54.912 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000d70000) not respected! 00:15:54.912 EAL: This may cause issues with mapping memory into secondary processes 00:15:54.912 EAL: Virtual area found at 0x1000d70000 (size = 0x2e000) 00:15:54.912 EAL: Setting up physically contiguous memory... 00:15:54.912 EAL: Ask a virtual area of 0x1000 bytes 00:15:54.912 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1001b18000) not respected! 00:15:54.912 EAL: This may cause issues with mapping memory into secondary processes 00:15:54.912 EAL: Virtual area found at 0x1001b18000 (size = 0x1000) 00:15:54.912 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:15:54.912 EAL: Ask a virtual area of 0xf0000000 bytes 00:15:54.912 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:15:54.912 EAL: This may cause issues with mapping memory into secondary processes 00:15:54.912 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:15:54.912 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:15:55.171 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x190000000, len 268435456 00:15:55.171 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x1a0000000, len 268435456 00:15:55.171 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x1b0000000, len 268435456 00:15:55.171 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x1c0000000, len 268435456 00:15:55.171 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1d0000000, len 268435456 00:15:55.429 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1e0000000, len 268435456 00:15:55.429 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1f0000000, len 268435456 00:15:55.429 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x200000000, len 268435456 00:15:55.429 EAL: No shared files mode enabled, IPC is disabled 00:15:55.429 EAL: Added 2048M to heap on socket 0 00:15:55.429 EAL: TSC is not safe to use in SMP mode 00:15:55.429 EAL: TSC is not invariant 00:15:55.429 EAL: TSC frequency is ~2100006 KHz 00:15:55.429 EAL: Main lcore 0 is ready (tid=82b675000;cpuset=[0]) 00:15:55.429 EAL: PCI scan found 10 devices 00:15:55.429 EAL: Registering mem event callbacks not supported 00:15:55.429 00:15:55.429 00:15:55.429 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.429 http://cunit.sourceforge.net/ 00:15:55.429 00:15:55.429 00:15:55.429 Suite: components_suite 00:15:55.429 Test: vtophys_malloc_test ...passed 00:15:55.687 Test: vtophys_spdk_malloc_test ...passed 00:15:55.688 00:15:55.688 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.688 suites 1 1 n/a 0 0 00:15:55.688 tests 2 2 2 0 0 00:15:55.688 asserts 497 497 497 0 n/a 00:15:55.688 00:15:55.688 Elapsed time = 0.312 seconds 00:15:55.946 00:15:55.946 real 0m0.877s 00:15:55.946 user 0m0.322s 00:15:55.946 sys 0m0.553s 00:15:55.946 07:28:49 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.946 07:28:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:55.946 ************************************ 00:15:55.946 END TEST env_vtophys 00:15:55.946 ************************************ 00:15:55.946 07:28:49 env -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:55.946 07:28:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:55.946 07:28:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.946 07:28:49 env -- common/autotest_common.sh@10 -- # set +x 00:15:55.946 ************************************ 00:15:55.946 START TEST env_pci 00:15:55.946 ************************************ 00:15:55.946 07:28:49 env.env_pci -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:55.946 00:15:55.946 00:15:55.946 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.946 http://cunit.sourceforge.net/ 00:15:55.946 00:15:55.946 00:15:55.946 Suite: pci 00:15:55.946 Test: pci_hook ...passed 00:15:55.946 00:15:55.946 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.946 suites 1 1 n/a 0 0 00:15:55.946 tests 1 1 1 0 0 00:15:55.946 asserts 25 25 25 0 n/a 00:15:55.946 00:15:55.946 Elapsed time = 0.000 seconds 00:15:55.946 EAL: Cannot find device (10000:00:01.0) 00:15:55.946 EAL: Failed to attach device on primary process 00:15:55.946 00:15:55.946 
real 0m0.010s 00:15:55.946 user 0m0.000s 00:15:55.946 sys 0m0.014s 00:15:55.946 07:28:49 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.946 07:28:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:55.946 ************************************ 00:15:55.946 END TEST env_pci 00:15:55.946 ************************************ 00:15:55.946 07:28:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:55.946 07:28:49 env -- env/env.sh@15 -- # uname 00:15:55.946 07:28:49 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:15:55.946 07:28:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:15:55.946 07:28:49 env -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:55.946 07:28:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.946 07:28:49 env -- common/autotest_common.sh@10 -- # set +x 00:15:55.946 ************************************ 00:15:55.946 START TEST env_dpdk_post_init 00:15:55.946 ************************************ 00:15:55.946 07:28:49 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:15:55.946 EAL: Sysctl reports 10 cpus 00:15:55.946 EAL: Detected CPU lcores: 10 00:15:55.946 EAL: Detected NUMA nodes: 1 00:15:55.946 EAL: Detected static linkage of DPDK 00:15:55.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:55.946 EAL: Selected IOVA mode 'PA' 00:15:55.946 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:15:55.946 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x190000000, len 268435456 00:15:55.946 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x1a0000000, len 268435456 00:15:56.204 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x1b0000000, len 268435456 00:15:56.204 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x1c0000000, len 268435456 00:15:56.204 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1d0000000, len 268435456 00:15:56.204 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1e0000000, len 268435456 00:15:56.204 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1f0000000, len 268435456 00:15:56.462 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x200000000, len 268435456 00:15:56.462 EAL: TSC is not safe to use in SMP mode 00:15:56.462 EAL: TSC is not invariant 00:15:56.462 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:56.462 [2024-05-16 07:28:49.824136] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:56.462 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:56.462 Starting DPDK initialization... 00:15:56.462 Starting SPDK post initialization... 00:15:56.462 SPDK NVMe probe 00:15:56.463 Attaching to 0000:00:10.0 00:15:56.463 Attached to 0000:00:10.0 00:15:56.463 Cleaning up... 
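Note on the EAL memory lines above: the FreeBSD EAL does not use Linux-style hugepages, so the eight 256 MB "Mapped memory segment" entries come from the contigmem kernel module, which reserves physically contiguous buffers at boot ("Contigmem driver has 8 buffers, each of size 256MB") and lets the 0x1060000000-0x10d0000000 virtual areas resolve to the 0x190000000-0x200000000 physical ranges printed above. A minimal /boot/loader.conf sketch matching this host's numbers, assuming the stock DPDK contigmem module rather than anything SPDK-specific:
  hw.contigmem.num_buffers=8
  hw.contigmem.buffer_size=268435456   # 256 MB per buffer, as reported above
  contigmem_load="YES"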
00:15:56.463 00:15:56.463 real 0m0.520s 00:15:56.463 user 0m0.012s 00:15:56.463 sys 0m0.503s 00:15:56.463 07:28:49 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.463 07:28:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:56.463 ************************************ 00:15:56.463 END TEST env_dpdk_post_init 00:15:56.463 ************************************ 00:15:56.463 07:28:49 env -- env/env.sh@26 -- # uname 00:15:56.463 07:28:49 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:15:56.463 00:15:56.463 real 0m2.070s 00:15:56.463 user 0m0.537s 00:15:56.463 sys 0m1.615s 00:15:56.463 07:28:49 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.463 ************************************ 00:15:56.463 END TEST env 00:15:56.463 ************************************ 00:15:56.463 07:28:49 env -- common/autotest_common.sh@10 -- # set +x 00:15:56.463 07:28:49 -- spdk/autotest.sh@165 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:56.463 07:28:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:56.463 07:28:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.463 07:28:49 -- common/autotest_common.sh@10 -- # set +x 00:15:56.463 ************************************ 00:15:56.463 START TEST rpc 00:15:56.463 ************************************ 00:15:56.463 07:28:49 rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:56.721 * Looking for test storage... 00:15:56.721 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:15:56.721 07:28:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=46345 00:15:56.721 07:28:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:56.721 07:28:50 rpc -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:56.721 07:28:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 46345 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@827 -- # '[' -z 46345 ']' 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.721 07:28:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.721 [2024-05-16 07:28:50.120702] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:15:56.721 [2024-05-16 07:28:50.120929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:57.287 EAL: TSC is not safe to use in SMP mode 00:15:57.287 EAL: TSC is not invariant 00:15:57.287 [2024-05-16 07:28:50.607112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.287 [2024-05-16 07:28:50.691448] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:57.287 [2024-05-16 07:28:50.693648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:57.287 [2024-05-16 07:28:50.693682] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 46345' to capture a snapshot of events at runtime. 
00:15:57.287 [2024-05-16 07:28:50.693710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.851 07:28:51 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.851 07:28:51 rpc -- common/autotest_common.sh@860 -- # return 0 00:15:57.851 07:28:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:15:57.851 07:28:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:15:57.851 07:28:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:57.851 07:28:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:57.852 07:28:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:57.852 07:28:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:57.852 07:28:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 ************************************ 00:15:57.852 START TEST rpc_integrity 00:15:57.852 ************************************ 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:57.852 { 00:15:57.852 "name": "Malloc0", 00:15:57.852 "aliases": [ 00:15:57.852 "f14efa3b-1355-11ef-8e8f-9dd684e56d79" 00:15:57.852 ], 00:15:57.852 "product_name": "Malloc disk", 00:15:57.852 "block_size": 512, 00:15:57.852 "num_blocks": 16384, 00:15:57.852 "uuid": "f14efa3b-1355-11ef-8e8f-9dd684e56d79", 00:15:57.852 "assigned_rate_limits": { 00:15:57.852 "rw_ios_per_sec": 0, 00:15:57.852 "rw_mbytes_per_sec": 0, 00:15:57.852 "r_mbytes_per_sec": 0, 00:15:57.852 "w_mbytes_per_sec": 0 00:15:57.852 }, 00:15:57.852 "claimed": false, 00:15:57.852 "zoned": false, 00:15:57.852 "supported_io_types": { 00:15:57.852 "read": true, 00:15:57.852 "write": true, 00:15:57.852 
"unmap": true, 00:15:57.852 "write_zeroes": true, 00:15:57.852 "flush": true, 00:15:57.852 "reset": true, 00:15:57.852 "compare": false, 00:15:57.852 "compare_and_write": false, 00:15:57.852 "abort": true, 00:15:57.852 "nvme_admin": false, 00:15:57.852 "nvme_io": false 00:15:57.852 }, 00:15:57.852 "memory_domains": [ 00:15:57.852 { 00:15:57.852 "dma_device_id": "system", 00:15:57.852 "dma_device_type": 1 00:15:57.852 }, 00:15:57.852 { 00:15:57.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.852 "dma_device_type": 2 00:15:57.852 } 00:15:57.852 ], 00:15:57.852 "driver_specific": {} 00:15:57.852 } 00:15:57.852 ]' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 [2024-05-16 07:28:51.372721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:57.852 [2024-05-16 07:28:51.372797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.852 [2024-05-16 07:28:51.373787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ab2ca00 00:15:57.852 [2024-05-16 07:28:51.373833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.852 [2024-05-16 07:28:51.374809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.852 [2024-05-16 07:28:51.374859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:57.852 Passthru0 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:57.852 { 00:15:57.852 "name": "Malloc0", 00:15:57.852 "aliases": [ 00:15:57.852 "f14efa3b-1355-11ef-8e8f-9dd684e56d79" 00:15:57.852 ], 00:15:57.852 "product_name": "Malloc disk", 00:15:57.852 "block_size": 512, 00:15:57.852 "num_blocks": 16384, 00:15:57.852 "uuid": "f14efa3b-1355-11ef-8e8f-9dd684e56d79", 00:15:57.852 "assigned_rate_limits": { 00:15:57.852 "rw_ios_per_sec": 0, 00:15:57.852 "rw_mbytes_per_sec": 0, 00:15:57.852 "r_mbytes_per_sec": 0, 00:15:57.852 "w_mbytes_per_sec": 0 00:15:57.852 }, 00:15:57.852 "claimed": true, 00:15:57.852 "claim_type": "exclusive_write", 00:15:57.852 "zoned": false, 00:15:57.852 "supported_io_types": { 00:15:57.852 "read": true, 00:15:57.852 "write": true, 00:15:57.852 "unmap": true, 00:15:57.852 "write_zeroes": true, 00:15:57.852 "flush": true, 00:15:57.852 "reset": true, 00:15:57.852 "compare": false, 00:15:57.852 "compare_and_write": false, 00:15:57.852 "abort": true, 00:15:57.852 "nvme_admin": false, 00:15:57.852 "nvme_io": false 00:15:57.852 }, 00:15:57.852 "memory_domains": [ 00:15:57.852 { 00:15:57.852 "dma_device_id": "system", 00:15:57.852 "dma_device_type": 1 00:15:57.852 }, 00:15:57.852 { 00:15:57.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:57.852 "dma_device_type": 2 00:15:57.852 } 00:15:57.852 ], 00:15:57.852 "driver_specific": {} 00:15:57.852 }, 00:15:57.852 { 00:15:57.852 "name": "Passthru0", 00:15:57.852 "aliases": [ 00:15:57.852 "017139e4-0b6e-c757-877d-275734677281" 00:15:57.852 ], 00:15:57.852 "product_name": "passthru", 00:15:57.852 "block_size": 512, 00:15:57.852 "num_blocks": 16384, 00:15:57.852 "uuid": "017139e4-0b6e-c757-877d-275734677281", 00:15:57.852 "assigned_rate_limits": { 00:15:57.852 "rw_ios_per_sec": 0, 00:15:57.852 "rw_mbytes_per_sec": 0, 00:15:57.852 "r_mbytes_per_sec": 0, 00:15:57.852 "w_mbytes_per_sec": 0 00:15:57.852 }, 00:15:57.852 "claimed": false, 00:15:57.852 "zoned": false, 00:15:57.852 "supported_io_types": { 00:15:57.852 "read": true, 00:15:57.852 "write": true, 00:15:57.852 "unmap": true, 00:15:57.852 "write_zeroes": true, 00:15:57.852 "flush": true, 00:15:57.852 "reset": true, 00:15:57.852 "compare": false, 00:15:57.852 "compare_and_write": false, 00:15:57.852 "abort": true, 00:15:57.852 "nvme_admin": false, 00:15:57.852 "nvme_io": false 00:15:57.852 }, 00:15:57.852 "memory_domains": [ 00:15:57.852 { 00:15:57.852 "dma_device_id": "system", 00:15:57.852 "dma_device_type": 1 00:15:57.852 }, 00:15:57.852 { 00:15:57.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.852 "dma_device_type": 2 00:15:57.852 } 00:15:57.852 ], 00:15:57.852 "driver_specific": { 00:15:57.852 "passthru": { 00:15:57.852 "name": "Passthru0", 00:15:57.852 "base_bdev_name": "Malloc0" 00:15:57.852 } 00:15:57.852 } 00:15:57.852 } 00:15:57.852 ]' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.110 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.110 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:58.110 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.110 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:58.111 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:58.111 07:28:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:58.111 00:15:58.111 real 0m0.131s 00:15:58.111 user 0m0.057s 00:15:58.111 sys 0m0.012s 00:15:58.111 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.111 ************************************ 00:15:58.111 END TEST rpc_integrity 00:15:58.111 07:28:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 ************************************ 00:15:58.111 07:28:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@1097 -- 
# '[' 2 -le 1 ']' 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 ************************************ 00:15:58.111 START TEST rpc_plugins 00:15:58.111 ************************************ 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:58.111 { 00:15:58.111 "name": "Malloc1", 00:15:58.111 "aliases": [ 00:15:58.111 "f166c732-1355-11ef-8e8f-9dd684e56d79" 00:15:58.111 ], 00:15:58.111 "product_name": "Malloc disk", 00:15:58.111 "block_size": 4096, 00:15:58.111 "num_blocks": 256, 00:15:58.111 "uuid": "f166c732-1355-11ef-8e8f-9dd684e56d79", 00:15:58.111 "assigned_rate_limits": { 00:15:58.111 "rw_ios_per_sec": 0, 00:15:58.111 "rw_mbytes_per_sec": 0, 00:15:58.111 "r_mbytes_per_sec": 0, 00:15:58.111 "w_mbytes_per_sec": 0 00:15:58.111 }, 00:15:58.111 "claimed": false, 00:15:58.111 "zoned": false, 00:15:58.111 "supported_io_types": { 00:15:58.111 "read": true, 00:15:58.111 "write": true, 00:15:58.111 "unmap": true, 00:15:58.111 "write_zeroes": true, 00:15:58.111 "flush": true, 00:15:58.111 "reset": true, 00:15:58.111 "compare": false, 00:15:58.111 "compare_and_write": false, 00:15:58.111 "abort": true, 00:15:58.111 "nvme_admin": false, 00:15:58.111 "nvme_io": false 00:15:58.111 }, 00:15:58.111 "memory_domains": [ 00:15:58.111 { 00:15:58.111 "dma_device_id": "system", 00:15:58.111 "dma_device_type": 1 00:15:58.111 }, 00:15:58.111 { 00:15:58.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.111 "dma_device_type": 2 00:15:58.111 } 00:15:58.111 ], 00:15:58.111 "driver_specific": {} 00:15:58.111 } 00:15:58.111 ]' 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 
00:15:58.111 07:28:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:58.111 00:15:58.111 real 0m0.075s 00:15:58.111 user 0m0.026s 00:15:58.111 sys 0m0.001s 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.111 ************************************ 00:15:58.111 END TEST rpc_plugins 00:15:58.111 07:28:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 ************************************ 00:15:58.111 07:28:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.111 07:28:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 ************************************ 00:15:58.111 START TEST rpc_trace_cmd_test 00:15:58.111 ************************************ 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:58.111 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid46345", 00:15:58.111 "tpoint_group_mask": "0x8", 00:15:58.111 "iscsi_conn": { 00:15:58.111 "mask": "0x2", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "scsi": { 00:15:58.111 "mask": "0x4", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "bdev": { 00:15:58.111 "mask": "0x8", 00:15:58.111 "tpoint_mask": "0xffffffffffffffff" 00:15:58.111 }, 00:15:58.111 "nvmf_rdma": { 00:15:58.111 "mask": "0x10", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "nvmf_tcp": { 00:15:58.111 "mask": "0x20", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "blobfs": { 00:15:58.111 "mask": "0x80", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "dsa": { 00:15:58.111 "mask": "0x200", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "thread": { 00:15:58.111 "mask": "0x400", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "nvme_pcie": { 00:15:58.111 "mask": "0x800", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "iaa": { 00:15:58.111 "mask": "0x1000", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "nvme_tcp": { 00:15:58.111 "mask": "0x2000", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "bdev_nvme": { 00:15:58.111 "mask": "0x4000", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 }, 00:15:58.111 "sock": { 00:15:58.111 "mask": "0x8000", 00:15:58.111 "tpoint_mask": "0x0" 00:15:58.111 } 00:15:58.111 }' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:58.111 07:28:51 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:58.111 00:15:58.111 real 0m0.062s 00:15:58.111 user 0m0.030s 00:15:58.111 sys 0m0.029s 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.111 07:28:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.111 ************************************ 00:15:58.111 END TEST rpc_trace_cmd_test 00:15:58.111 ************************************ 00:15:58.369 07:28:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:58.369 07:28:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:58.369 07:28:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:58.369 07:28:51 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.369 07:28:51 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.369 07:28:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.369 ************************************ 00:15:58.369 START TEST rpc_daemon_integrity 00:15:58.369 ************************************ 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.369 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:58.369 { 00:15:58.369 "name": "Malloc2", 00:15:58.369 "aliases": [ 00:15:58.369 "f1885827-1355-11ef-8e8f-9dd684e56d79" 00:15:58.369 ], 00:15:58.369 "product_name": "Malloc disk", 00:15:58.369 "block_size": 512, 00:15:58.369 "num_blocks": 16384, 00:15:58.369 "uuid": "f1885827-1355-11ef-8e8f-9dd684e56d79", 00:15:58.369 "assigned_rate_limits": { 00:15:58.369 "rw_ios_per_sec": 0, 00:15:58.369 "rw_mbytes_per_sec": 0, 00:15:58.369 "r_mbytes_per_sec": 
0, 00:15:58.369 "w_mbytes_per_sec": 0 00:15:58.369 }, 00:15:58.369 "claimed": false, 00:15:58.369 "zoned": false, 00:15:58.369 "supported_io_types": { 00:15:58.369 "read": true, 00:15:58.369 "write": true, 00:15:58.369 "unmap": true, 00:15:58.369 "write_zeroes": true, 00:15:58.369 "flush": true, 00:15:58.369 "reset": true, 00:15:58.369 "compare": false, 00:15:58.369 "compare_and_write": false, 00:15:58.369 "abort": true, 00:15:58.369 "nvme_admin": false, 00:15:58.369 "nvme_io": false 00:15:58.369 }, 00:15:58.369 "memory_domains": [ 00:15:58.369 { 00:15:58.369 "dma_device_id": "system", 00:15:58.369 "dma_device_type": 1 00:15:58.369 }, 00:15:58.369 { 00:15:58.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.370 "dma_device_type": 2 00:15:58.370 } 00:15:58.370 ], 00:15:58.370 "driver_specific": {} 00:15:58.370 } 00:15:58.370 ]' 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 [2024-05-16 07:28:51.748724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:58.370 [2024-05-16 07:28:51.748780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.370 [2024-05-16 07:28:51.748813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ab2ca00 00:15:58.370 [2024-05-16 07:28:51.748835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.370 [2024-05-16 07:28:51.749401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.370 [2024-05-16 07:28:51.749437] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:58.370 Passthru0 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:58.370 { 00:15:58.370 "name": "Malloc2", 00:15:58.370 "aliases": [ 00:15:58.370 "f1885827-1355-11ef-8e8f-9dd684e56d79" 00:15:58.370 ], 00:15:58.370 "product_name": "Malloc disk", 00:15:58.370 "block_size": 512, 00:15:58.370 "num_blocks": 16384, 00:15:58.370 "uuid": "f1885827-1355-11ef-8e8f-9dd684e56d79", 00:15:58.370 "assigned_rate_limits": { 00:15:58.370 "rw_ios_per_sec": 0, 00:15:58.370 "rw_mbytes_per_sec": 0, 00:15:58.370 "r_mbytes_per_sec": 0, 00:15:58.370 "w_mbytes_per_sec": 0 00:15:58.370 }, 00:15:58.370 "claimed": true, 00:15:58.370 "claim_type": "exclusive_write", 00:15:58.370 "zoned": false, 00:15:58.370 "supported_io_types": { 00:15:58.370 "read": true, 00:15:58.370 "write": true, 00:15:58.370 "unmap": true, 00:15:58.370 "write_zeroes": true, 00:15:58.370 "flush": true, 00:15:58.370 "reset": true, 00:15:58.370 "compare": false, 00:15:58.370 "compare_and_write": false, 00:15:58.370 "abort": true, 
00:15:58.370 "nvme_admin": false, 00:15:58.370 "nvme_io": false 00:15:58.370 }, 00:15:58.370 "memory_domains": [ 00:15:58.370 { 00:15:58.370 "dma_device_id": "system", 00:15:58.370 "dma_device_type": 1 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.370 "dma_device_type": 2 00:15:58.370 } 00:15:58.370 ], 00:15:58.370 "driver_specific": {} 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "name": "Passthru0", 00:15:58.370 "aliases": [ 00:15:58.370 "d73b2f88-1698-645b-ae70-5ede5207efad" 00:15:58.370 ], 00:15:58.370 "product_name": "passthru", 00:15:58.370 "block_size": 512, 00:15:58.370 "num_blocks": 16384, 00:15:58.370 "uuid": "d73b2f88-1698-645b-ae70-5ede5207efad", 00:15:58.370 "assigned_rate_limits": { 00:15:58.370 "rw_ios_per_sec": 0, 00:15:58.370 "rw_mbytes_per_sec": 0, 00:15:58.370 "r_mbytes_per_sec": 0, 00:15:58.370 "w_mbytes_per_sec": 0 00:15:58.370 }, 00:15:58.370 "claimed": false, 00:15:58.370 "zoned": false, 00:15:58.370 "supported_io_types": { 00:15:58.370 "read": true, 00:15:58.370 "write": true, 00:15:58.370 "unmap": true, 00:15:58.370 "write_zeroes": true, 00:15:58.370 "flush": true, 00:15:58.370 "reset": true, 00:15:58.370 "compare": false, 00:15:58.370 "compare_and_write": false, 00:15:58.370 "abort": true, 00:15:58.370 "nvme_admin": false, 00:15:58.370 "nvme_io": false 00:15:58.370 }, 00:15:58.370 "memory_domains": [ 00:15:58.370 { 00:15:58.370 "dma_device_id": "system", 00:15:58.370 "dma_device_type": 1 00:15:58.370 }, 00:15:58.370 { 00:15:58.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.370 "dma_device_type": 2 00:15:58.370 } 00:15:58.370 ], 00:15:58.370 "driver_specific": { 00:15:58.370 "passthru": { 00:15:58.370 "name": "Passthru0", 00:15:58.370 "base_bdev_name": "Malloc2" 00:15:58.370 } 00:15:58.370 } 00:15:58.370 } 00:15:58.370 ]' 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:58.370 00:15:58.370 real 0m0.122s 00:15:58.370 user 0m0.041s 00:15:58.370 sys 0m0.019s 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.370 07:28:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:58.370 ************************************ 00:15:58.370 END TEST rpc_daemon_integrity 00:15:58.370 ************************************ 00:15:58.370 07:28:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:58.370 07:28:51 rpc -- rpc/rpc.sh@84 -- # killprocess 46345 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@946 -- # '[' -z 46345 ']' 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@950 -- # kill -0 46345 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@951 -- # uname 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46345 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@954 -- # tail -1 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:15:58.370 killing process with pid 46345 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46345' 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@965 -- # kill 46345 00:15:58.370 07:28:51 rpc -- common/autotest_common.sh@970 -- # wait 46345 00:15:58.642 00:15:58.642 real 0m2.153s 00:15:58.642 user 0m2.434s 00:15:58.642 sys 0m0.829s 00:15:58.642 07:28:52 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:58.642 07:28:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.642 ************************************ 00:15:58.642 END TEST rpc 00:15:58.642 ************************************ 00:15:58.642 07:28:52 -- spdk/autotest.sh@166 -- # run_test skip_rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:58.642 07:28:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.642 07:28:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.642 07:28:52 -- common/autotest_common.sh@10 -- # set +x 00:15:58.642 ************************************ 00:15:58.642 START TEST skip_rpc 00:15:58.642 ************************************ 00:15:58.642 07:28:52 skip_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:58.900 * Looking for test storage... 
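For reference, the rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity cases above drive the bdev-enabled spdk_tgt (pid 46345) over /var/tmp/spdk.sock through the rpc_cmd wrapper. Roughly the same sequence can be replayed by hand with SPDK's scripts/rpc.py client; this is only a sketch using the RPC names and arguments that appear in the log, with the jq-based result checks omitted:
  # assumes an spdk_tgt -e bdev listening on the default /var/tmp/spdk.sock
  scripts/rpc.py bdev_malloc_create 8 512             # creates Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs                       # JSON like the bdevs='[...]' blobs above
  scripts/rpc.py trace_get_info                       # tpoint_group_mask 0x8, the bdev group enabled by -e bdev
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0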
00:15:58.900 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:15:58.900 07:28:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:58.900 07:28:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:58.900 07:28:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:58.900 07:28:52 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:58.900 07:28:52 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:58.900 07:28:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.900 ************************************ 00:15:58.900 START TEST skip_rpc 00:15:58.900 ************************************ 00:15:58.900 07:28:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:15:58.900 07:28:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46521 00:15:58.900 07:28:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:58.900 07:28:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:58.900 07:28:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:58.900 [2024-05-16 07:28:52.371476] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:15:58.900 [2024-05-16 07:28:52.371647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:59.463 EAL: TSC is not safe to use in SMP mode 00:15:59.463 EAL: TSC is not invariant 00:15:59.463 [2024-05-16 07:28:52.870991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.464 [2024-05-16 07:28:52.955631] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:59.464 [2024-05-16 07:28:52.957868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46521 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 46521 ']' 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 46521 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 46521 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # tail -1 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:04.721 killing process with pid 46521 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46521' 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 46521 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 46521 00:16:04.721 00:16:04.721 real 0m5.363s 00:16:04.721 user 0m4.828s 00:16:04.721 sys 0m0.553s 00:16:04.721 ************************************ 00:16:04.721 END TEST skip_rpc 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:04.721 07:28:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.721 ************************************ 00:16:04.722 07:28:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:16:04.722 07:28:57 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:04.722 07:28:57 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:04.722 07:28:57 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.722 ************************************ 00:16:04.722 START TEST skip_rpc_with_json 00:16:04.722 ************************************ 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46566 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46566 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 46566 ']' 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:04.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:04.722 07:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:04.722 [2024-05-16 07:28:57.772418] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:04.722 [2024-05-16 07:28:57.772682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:04.722 EAL: TSC is not safe to use in SMP mode 00:16:04.722 EAL: TSC is not invariant 00:16:04.722 [2024-05-16 07:28:58.249356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.980 [2024-05-16 07:28:58.335926] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
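Unlike the previous case, skip_rpc_with_json launches spdk_tgt with its RPC server enabled and blocks in waitforlisten until /var/tmp/spdk.sock answers. A rough sketch of such a wait loop follows; the 100-retry bound comes from the log, while using rpc.py spdk_get_version as the readiness probe is an assumption rather than the helper's actual implementation.

    # Sketch: poll the RPC socket until the target responds, then let the test proceed.
    rpc_sock=/var/tmp/spdk.sock
    rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten() {                    # hypothetical stand-in for the helper seen in the log
        local pid=$1 retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target died early
            "$rpc_py" -s "$rpc_sock" spdk_get_version >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }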
00:16:04.980 [2024-05-16 07:28:58.338077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:05.549 [2024-05-16 07:28:58.823444] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:16:05.549 request: 00:16:05.549 { 00:16:05.549 "trtype": "tcp", 00:16:05.549 "method": "nvmf_get_transports", 00:16:05.549 "req_id": 1 00:16:05.549 } 00:16:05.549 Got JSON-RPC error response 00:16:05.549 response: 00:16:05.549 { 00:16:05.549 "code": -19, 00:16:05.549 "message": "Operation not supported by device" 00:16:05.549 } 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:05.549 [2024-05-16 07:28:58.835456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.549 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:05.549 { 00:16:05.549 "subsystems": [ 00:16:05.549 { 00:16:05.549 "subsystem": "vmd", 00:16:05.549 "config": [] 00:16:05.549 }, 00:16:05.549 { 00:16:05.549 "subsystem": "iobuf", 00:16:05.549 "config": [ 00:16:05.549 { 00:16:05.549 "method": "iobuf_set_options", 00:16:05.549 "params": { 00:16:05.549 "small_pool_count": 8192, 00:16:05.549 "large_pool_count": 1024, 00:16:05.549 "small_bufsize": 8192, 00:16:05.549 "large_bufsize": 135168 00:16:05.549 } 00:16:05.549 } 00:16:05.549 ] 00:16:05.549 }, 00:16:05.549 { 00:16:05.549 "subsystem": "scheduler", 00:16:05.549 "config": [ 00:16:05.549 { 00:16:05.549 "method": "framework_set_scheduler", 00:16:05.549 "params": { 00:16:05.549 "name": "static" 00:16:05.549 } 00:16:05.549 } 00:16:05.549 ] 00:16:05.549 }, 00:16:05.549 { 00:16:05.549 "subsystem": "sock", 00:16:05.549 "config": [ 00:16:05.549 { 00:16:05.549 "method": "sock_impl_set_options", 00:16:05.550 "params": { 00:16:05.550 "impl_name": "posix", 00:16:05.550 "recv_buf_size": 2097152, 00:16:05.550 "send_buf_size": 2097152, 00:16:05.550 "enable_recv_pipe": true, 00:16:05.550 "enable_quickack": false, 00:16:05.550 "enable_placement_id": 0, 00:16:05.550 "enable_zerocopy_send_server": true, 00:16:05.550 "enable_zerocopy_send_client": false, 00:16:05.550 "zerocopy_threshold": 0, 00:16:05.550 "tls_version": 0, 
00:16:05.550 "enable_ktls": false 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "sock_impl_set_options", 00:16:05.550 "params": { 00:16:05.550 "impl_name": "ssl", 00:16:05.550 "recv_buf_size": 4096, 00:16:05.550 "send_buf_size": 4096, 00:16:05.550 "enable_recv_pipe": true, 00:16:05.550 "enable_quickack": false, 00:16:05.550 "enable_placement_id": 0, 00:16:05.550 "enable_zerocopy_send_server": true, 00:16:05.550 "enable_zerocopy_send_client": false, 00:16:05.550 "zerocopy_threshold": 0, 00:16:05.550 "tls_version": 0, 00:16:05.550 "enable_ktls": false 00:16:05.550 } 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "keyring", 00:16:05.550 "config": [] 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "accel", 00:16:05.550 "config": [ 00:16:05.550 { 00:16:05.550 "method": "accel_set_options", 00:16:05.550 "params": { 00:16:05.550 "small_cache_size": 128, 00:16:05.550 "large_cache_size": 16, 00:16:05.550 "task_count": 2048, 00:16:05.550 "sequence_count": 2048, 00:16:05.550 "buf_count": 2048 00:16:05.550 } 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "bdev", 00:16:05.550 "config": [ 00:16:05.550 { 00:16:05.550 "method": "bdev_set_options", 00:16:05.550 "params": { 00:16:05.550 "bdev_io_pool_size": 65535, 00:16:05.550 "bdev_io_cache_size": 256, 00:16:05.550 "bdev_auto_examine": true, 00:16:05.550 "iobuf_small_cache_size": 128, 00:16:05.550 "iobuf_large_cache_size": 16 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "bdev_raid_set_options", 00:16:05.550 "params": { 00:16:05.550 "process_window_size_kb": 1024 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "bdev_nvme_set_options", 00:16:05.550 "params": { 00:16:05.550 "action_on_timeout": "none", 00:16:05.550 "timeout_us": 0, 00:16:05.550 "timeout_admin_us": 0, 00:16:05.550 "keep_alive_timeout_ms": 10000, 00:16:05.550 "arbitration_burst": 0, 00:16:05.550 "low_priority_weight": 0, 00:16:05.550 "medium_priority_weight": 0, 00:16:05.550 "high_priority_weight": 0, 00:16:05.550 "nvme_adminq_poll_period_us": 10000, 00:16:05.550 "nvme_ioq_poll_period_us": 0, 00:16:05.550 "io_queue_requests": 0, 00:16:05.550 "delay_cmd_submit": true, 00:16:05.550 "transport_retry_count": 4, 00:16:05.550 "bdev_retry_count": 3, 00:16:05.550 "transport_ack_timeout": 0, 00:16:05.550 "ctrlr_loss_timeout_sec": 0, 00:16:05.550 "reconnect_delay_sec": 0, 00:16:05.550 "fast_io_fail_timeout_sec": 0, 00:16:05.550 "disable_auto_failback": false, 00:16:05.550 "generate_uuids": false, 00:16:05.550 "transport_tos": 0, 00:16:05.550 "nvme_error_stat": false, 00:16:05.550 "rdma_srq_size": 0, 00:16:05.550 "io_path_stat": false, 00:16:05.550 "allow_accel_sequence": false, 00:16:05.550 "rdma_max_cq_size": 0, 00:16:05.550 "rdma_cm_event_timeout_ms": 0, 00:16:05.550 "dhchap_digests": [ 00:16:05.550 "sha256", 00:16:05.550 "sha384", 00:16:05.550 "sha512" 00:16:05.550 ], 00:16:05.550 "dhchap_dhgroups": [ 00:16:05.550 "null", 00:16:05.550 "ffdhe2048", 00:16:05.550 "ffdhe3072", 00:16:05.550 "ffdhe4096", 00:16:05.550 "ffdhe6144", 00:16:05.550 "ffdhe8192" 00:16:05.550 ] 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "bdev_nvme_set_hotplug", 00:16:05.550 "params": { 00:16:05.550 "period_us": 100000, 00:16:05.550 "enable": false 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "bdev_wait_for_examine" 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "scsi", 00:16:05.550 
"config": null 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "nvmf", 00:16:05.550 "config": [ 00:16:05.550 { 00:16:05.550 "method": "nvmf_set_config", 00:16:05.550 "params": { 00:16:05.550 "discovery_filter": "match_any", 00:16:05.550 "admin_cmd_passthru": { 00:16:05.550 "identify_ctrlr": false 00:16:05.550 } 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "nvmf_set_max_subsystems", 00:16:05.550 "params": { 00:16:05.550 "max_subsystems": 1024 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "nvmf_set_crdt", 00:16:05.550 "params": { 00:16:05.550 "crdt1": 0, 00:16:05.550 "crdt2": 0, 00:16:05.550 "crdt3": 0 00:16:05.550 } 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "method": "nvmf_create_transport", 00:16:05.550 "params": { 00:16:05.550 "trtype": "TCP", 00:16:05.550 "max_queue_depth": 128, 00:16:05.550 "max_io_qpairs_per_ctrlr": 127, 00:16:05.550 "in_capsule_data_size": 4096, 00:16:05.550 "max_io_size": 131072, 00:16:05.550 "io_unit_size": 131072, 00:16:05.550 "max_aq_depth": 128, 00:16:05.550 "num_shared_buffers": 511, 00:16:05.550 "buf_cache_size": 4294967295, 00:16:05.550 "dif_insert_or_strip": false, 00:16:05.550 "zcopy": false, 00:16:05.550 "c2h_success": true, 00:16:05.550 "sock_priority": 0, 00:16:05.550 "abort_timeout_sec": 1, 00:16:05.550 "ack_timeout": 0, 00:16:05.550 "data_wr_pool_size": 0 00:16:05.550 } 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 }, 00:16:05.550 { 00:16:05.550 "subsystem": "iscsi", 00:16:05.550 "config": [ 00:16:05.550 { 00:16:05.550 "method": "iscsi_set_options", 00:16:05.550 "params": { 00:16:05.550 "node_base": "iqn.2016-06.io.spdk", 00:16:05.550 "max_sessions": 128, 00:16:05.550 "max_connections_per_session": 2, 00:16:05.550 "max_queue_depth": 64, 00:16:05.550 "default_time2wait": 2, 00:16:05.550 "default_time2retain": 20, 00:16:05.550 "first_burst_length": 8192, 00:16:05.550 "immediate_data": true, 00:16:05.550 "allow_duplicated_isid": false, 00:16:05.550 "error_recovery_level": 0, 00:16:05.550 "nop_timeout": 60, 00:16:05.550 "nop_in_interval": 30, 00:16:05.550 "disable_chap": false, 00:16:05.550 "require_chap": false, 00:16:05.550 "mutual_chap": false, 00:16:05.550 "chap_group": 0, 00:16:05.550 "max_large_datain_per_connection": 64, 00:16:05.550 "max_r2t_per_connection": 4, 00:16:05.550 "pdu_pool_size": 36864, 00:16:05.550 "immediate_data_pool_size": 16384, 00:16:05.550 "data_out_pool_size": 2048 00:16:05.550 } 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 } 00:16:05.550 ] 00:16:05.550 } 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46566 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46566 ']' 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46566 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46566 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 
00:16:05.550 killing process with pid 46566 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46566' 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46566 00:16:05.550 07:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46566 00:16:05.809 07:28:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:05.809 07:28:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46580 00:16:05.809 07:28:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 46580 ']' 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps -c -o command 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # tail -1 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:11.075 killing process with pid 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46580' 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 46580 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:11.075 00:16:11.075 real 0m6.841s 00:16:11.075 user 0m6.342s 00:16:11.075 sys 0m1.066s 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:11.075 ************************************ 00:16:11.075 END TEST skip_rpc_with_json 00:16:11.075 ************************************ 00:16:11.075 07:29:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:16:11.075 07:29:04 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:11.075 07:29:04 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.075 07:29:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.075 ************************************ 00:16:11.075 START TEST skip_rpc_with_delay 00:16:11.075 ************************************ 00:16:11.075 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server 
-m 0x1 --wait-for-rpc 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:11.333 [2024-05-16 07:29:04.651999] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:16:11.333 [2024-05-16 07:29:04.652243] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:11.333 00:16:11.333 real 0m0.010s 00:16:11.333 user 0m0.010s 00:16:11.333 sys 0m0.000s 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.333 ************************************ 00:16:11.333 END TEST skip_rpc_with_delay 00:16:11.333 ************************************ 00:16:11.333 07:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:16:11.333 07:29:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:16:11.333 07:29:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:16:11.333 07:29:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:11.333 00:16:11.333 real 0m12.549s 00:16:11.333 user 0m11.382s 00:16:11.333 sys 0m1.838s 00:16:11.333 07:29:04 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.333 07:29:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.333 ************************************ 00:16:11.333 END TEST skip_rpc 00:16:11.333 ************************************ 00:16:11.333 07:29:04 -- spdk/autotest.sh@167 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:11.333 07:29:04 -- common/autotest_common.sh@1097 -- # '[' 2 
-le 1 ']' 00:16:11.333 07:29:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.333 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:16:11.333 ************************************ 00:16:11.333 START TEST rpc_client 00:16:11.333 ************************************ 00:16:11.333 07:29:04 rpc_client -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:16:11.333 * Looking for test storage... 00:16:11.333 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:16:11.333 07:29:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:16:11.333 OK 00:16:11.333 07:29:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:16:11.333 00:16:11.333 real 0m0.148s 00:16:11.333 user 0m0.136s 00:16:11.333 sys 0m0.080s 00:16:11.333 07:29:04 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:11.333 07:29:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:16:11.333 ************************************ 00:16:11.333 END TEST rpc_client 00:16:11.333 ************************************ 00:16:11.590 07:29:04 -- spdk/autotest.sh@168 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:11.590 07:29:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:11.590 07:29:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:11.590 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 ************************************ 00:16:11.590 START TEST json_config 00:16:11.590 ************************************ 00:16:11.590 07:29:04 json_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.590 07:29:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:16:11.590 07:29:05 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:16:11.590 07:29:05 json_config -- nvmf/common.sh@7 -- # return 0 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:11.590 INFO: JSON configuration test init 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 07:29:05 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:16:11.590 07:29:05 json_config -- json_config/common.sh@9 -- # local app=target 00:16:11.590 07:29:05 json_config -- json_config/common.sh@10 -- # shift 00:16:11.590 07:29:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:11.590 07:29:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:11.590 07:29:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:16:11.590 07:29:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:11.590 07:29:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:11.590 07:29:05 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:16:11.590 07:29:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46739 00:16:11.590 07:29:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:11.590 Waiting for target to run... 00:16:11.590 07:29:05 json_config -- json_config/common.sh@25 -- # waitforlisten 46739 /var/tmp/spdk_tgt.sock 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@827 -- # '[' -z 46739 ']' 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:11.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:11.590 07:29:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 [2024-05-16 07:29:05.085540] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:16:11.590 [2024-05-16 07:29:05.085678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:11.879 EAL: TSC is not safe to use in SMP mode 00:16:11.879 EAL: TSC is not invariant 00:16:11.879 [2024-05-16 07:29:05.320394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.879 [2024-05-16 07:29:05.417005] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:11.879 [2024-05-16 07:29:05.419937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.811 07:29:06 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:12.811 07:29:06 json_config -- common/autotest_common.sh@860 -- # return 0 00:16:12.811 00:16:12.811 07:29:06 json_config -- json_config/common.sh@26 -- # echo '' 00:16:12.811 07:29:06 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:16:12.811 07:29:06 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:16:12.811 07:29:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:12.811 07:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:12.811 07:29:06 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:16:12.811 07:29:06 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:16:12.812 07:29:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.812 07:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:12.812 07:29:06 json_config -- json_config/json_config.sh@273 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:16:12.812 07:29:06 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:16:12.812 07:29:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:16:13.070 [2024-05-16 07:29:06.553498] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:16:13.070 07:29:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.070 07:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:16:13.070 07:29:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:16:13.070 07:29:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:16:13.637 07:29:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:16:13.637 07:29:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:16:13.638 07:29:06 json_config -- 
json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:16:13.638 07:29:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.638 07:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@55 -- # return 0 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:16:13.638 07:29:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.638 07:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:16:13.638 07:29:06 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:16:13.638 07:29:06 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:16:13.638 07:29:07 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:16:13.638 07:29:07 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:16:13.896 Nvme0n1p0 Nvme0n1p1 00:16:13.896 07:29:07 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:16:13.896 07:29:07 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:16:14.155 [2024-05-16 07:29:07.629564] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:16:14.155 [2024-05-16 07:29:07.629623] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:16:14.155 00:16:14.155 07:29:07 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:16:14.155 07:29:07 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:16:14.413 Malloc3 00:16:14.413 
07:29:07 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:16:14.413 07:29:07 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:16:14.693 [2024-05-16 07:29:08.205625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:14.693 [2024-05-16 07:29:08.205693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.693 [2024-05-16 07:29:08.205725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bfda180 00:16:14.693 [2024-05-16 07:29:08.205734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.693 [2024-05-16 07:29:08.206281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.693 [2024-05-16 07:29:08.206319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:16:14.693 PTBdevFromMalloc3 00:16:14.693 07:29:08 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:16:14.693 07:29:08 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:16:14.973 Null0 00:16:14.973 07:29:08 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:16:14.973 07:29:08 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:16:15.232 Malloc0 00:16:15.232 07:29:08 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:16:15.232 07:29:08 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:16:15.492 Malloc1 00:16:15.492 07:29:08 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:16:15.492 07:29:08 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:16:16.058 102400+0 records in 00:16:16.058 102400+0 records out 00:16:16.058 104857600 bytes transferred in 0.326650 secs (321008581 bytes/sec) 00:16:16.058 07:29:09 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:16:16.058 07:29:09 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:16:16.058 aio_disk 00:16:16.058 07:29:09 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:16:16.058 07:29:09 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:16:16.058 07:29:09 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:16:16.316 fc590bb9-1355-11ef-8e8f-9dd684e56d79 00:16:16.316 07:29:09 json_config -- 
json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:16:16.316 07:29:09 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:16:16.316 07:29:09 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:16:16.882 07:29:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:16:16.882 07:29:10 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:16:16.882 07:29:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:16:16.882 07:29:10 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:16:17.140 07:29:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:16:17.140 07:29:10 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@71 -- # sort 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@72 -- # sort 
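The json_config flow above builds a list of expected bdev_register events and then pulls the recorded ones back with notify_get_notifications, sorting both sides before comparing. A condensed sketch of that check is below; the jq projection and the type:ctx trimming mirror what the log prints, but the final comparison is simplified here to a plain string test.

    # Sketch: compare expected bdev events against what the target actually reported.
    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    get_notifications() {
        # Same projection as the log: "type:ctx:id" per event, starting from id 0,
        # with the event id dropped before comparison.
        $rpc_py notify_get_notifications -i 0 |
            jq -r '.[] | "\(.type):\(.ctx):\(.id)"' |
            while IFS=: read -r ev_type ev_ctx ev_id; do
                echo "$ev_type:$ev_ctx"
            done
    }

    tgt_check_notifications() {
        local expected recorded
        expected=$(printf '%s\n' "$@" | sort)
        recorded=$(get_notifications | sort)
        [[ "$recorded" == "$expected" ]] || echo 'notification mismatch' >&2
    }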
00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:16:17.398 07:29:10 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:16:17.398 07:29:10 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.657 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\c\8\5\9\a\7\3\-\1\3\5\5\-\1\1\e\f\-\8\e\8\f\-\9\d\d\6\8\4\e\5\6\d\7\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\c\a\5\f\4\0\e\-\1\3\5\5\-\1\1\e\f\-\8\e\8\f\-\9\d\d\6\8\4\e\5\6\d\7\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\c\c\d\a\0\b\8\-\1\3\5\5\-\1\1\e\f\-\8\e\8\f\-\9\d\d\6\8\4\e\5\6\d\7\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\c\e\c\c\1\f\6\-\1\3\5\5\-\1\1\e\f\-\8\e\8\f\-\9\d\d\6\8\4\e\5\6\d\7\9 ]] 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@86 -- # cat 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 
bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 Expected events matched: 00:16:17.658 bdev_register:Malloc0 00:16:17.658 bdev_register:Malloc0p0 00:16:17.658 bdev_register:Malloc0p1 00:16:17.658 bdev_register:Malloc0p2 00:16:17.658 bdev_register:Malloc1 00:16:17.658 bdev_register:Malloc3 00:16:17.658 bdev_register:Null0 00:16:17.658 bdev_register:Nvme0n1 00:16:17.658 bdev_register:Nvme0n1p0 00:16:17.658 bdev_register:Nvme0n1p1 00:16:17.658 bdev_register:PTBdevFromMalloc3 00:16:17.658 bdev_register:aio_disk 00:16:17.658 bdev_register:fc859a73-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 bdev_register:fca5f40e-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 bdev_register:fccda0b8-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 bdev_register:fcecc1f6-1355-11ef-8e8f-9dd684e56d79 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:16:17.658 07:29:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.658 07:29:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:16:17.658 07:29:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.658 07:29:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:16:17.658 07:29:11 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:17.658 07:29:11 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:16:17.934 MallocBdevForConfigChangeCheck 00:16:17.934 07:29:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:16:17.934 07:29:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.934 07:29:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:18.193 07:29:11 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:16:18.193 07:29:11 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:18.454 INFO: shutting down applications... 00:16:18.454 07:29:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
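Right before the shutdown message above, the test snapshots the live configuration with save_config over /var/tmp/spdk_tgt.sock so it can be compared after a relaunch. A small sketch of capturing such a snapshot follows; the output path is illustrative, not the path the test uses.

    # Sketch: capture the running configuration for a later 'same config' comparison.
    rpc_py="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    cfg_out=/tmp/spdk_tgt_config_snapshot.json     # illustrative path only

    $rpc_py save_config > "$cfg_out"
    jq '.subsystems | length' "$cfg_out"           # quick sanity check that subsystems were captured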
00:16:18.454 07:29:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:16:18.454 07:29:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:16:18.454 07:29:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:16:18.454 07:29:11 json_config -- json_config/json_config.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:16:18.454 [2024-05-16 07:29:11.981809] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:16:18.712 Calling clear_iscsi_subsystem 00:16:18.712 Calling clear_nvmf_subsystem 00:16:18.712 Calling clear_bdev_subsystem 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@337 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@343 -- # count=100 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:16:18.712 07:29:12 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:16:18.970 07:29:12 json_config -- json_config/json_config.sh@345 -- # break 00:16:18.970 07:29:12 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:16:18.970 07:29:12 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:16:18.970 07:29:12 json_config -- json_config/common.sh@31 -- # local app=target 00:16:18.970 07:29:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:18.970 07:29:12 json_config -- json_config/common.sh@35 -- # [[ -n 46739 ]] 00:16:18.970 07:29:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46739 00:16:18.970 07:29:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:18.970 07:29:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:18.970 07:29:12 json_config -- json_config/common.sh@41 -- # kill -0 46739 00:16:18.970 07:29:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:16:19.535 07:29:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:16:19.535 07:29:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:19.535 07:29:13 json_config -- json_config/common.sh@41 -- # kill -0 46739 00:16:19.535 07:29:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:19.536 07:29:13 json_config -- json_config/common.sh@43 -- # break 00:16:19.536 07:29:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:19.536 SPDK target shutdown done 00:16:19.536 07:29:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:19.536 INFO: relaunching applications... 00:16:19.536 07:29:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
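The shutdown step above sends SIGINT to the target and then polls kill -0 in a bounded loop (30 iterations with a half-second sleep, per the log) before reporting that shutdown is done. A sketch of that pattern, with the loop bound taken from the log and the rest assumed:

    # Sketch: graceful shutdown with a bounded wait, mirroring the loop in the log.
    app_pid=46739                        # the pid from this particular run; illustrative only
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done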
00:16:19.536 07:29:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:19.536 07:29:13 json_config -- json_config/common.sh@9 -- # local app=target 00:16:19.536 07:29:13 json_config -- json_config/common.sh@10 -- # shift 00:16:19.536 07:29:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:19.536 07:29:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:19.536 07:29:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:16:19.536 07:29:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:19.536 07:29:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:19.536 07:29:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46925 00:16:19.536 07:29:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:16:19.536 Waiting for target to run... 00:16:19.536 07:29:13 json_config -- json_config/common.sh@25 -- # waitforlisten 46925 /var/tmp/spdk_tgt.sock 00:16:19.536 07:29:13 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@827 -- # '[' -z 46925 ']' 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:19.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:19.536 07:29:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:19.536 [2024-05-16 07:29:13.034578] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:19.536 [2024-05-16 07:29:13.034799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:19.795 EAL: TSC is not safe to use in SMP mode 00:16:19.795 EAL: TSC is not invariant 00:16:19.795 [2024-05-16 07:29:13.270161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.795 [2024-05-16 07:29:13.354881] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:19.795 [2024-05-16 07:29:13.357181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.053 [2024-05-16 07:29:13.487912] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:16:20.053 [2024-05-16 07:29:13.487975] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:16:20.053 [2024-05-16 07:29:13.495908] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:16:20.053 [2024-05-16 07:29:13.495958] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:16:20.054 [2024-05-16 07:29:13.503926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:20.054 [2024-05-16 07:29:13.503973] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:16:20.054 [2024-05-16 07:29:13.503985] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:16:20.054 [2024-05-16 07:29:13.511918] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:20.054 [2024-05-16 07:29:13.584228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:16:20.054 [2024-05-16 07:29:13.584288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.054 [2024-05-16 07:29:13.584352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b523780 00:16:20.054 [2024-05-16 07:29:13.584361] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.054 [2024-05-16 07:29:13.584427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.054 [2024-05-16 07:29:13.584437] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:16:20.623 07:29:14 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:20.623 07:29:14 json_config -- common/autotest_common.sh@860 -- # return 0 00:16:20.623 00:16:20.623 07:29:14 json_config -- json_config/common.sh@26 -- # echo '' 00:16:20.623 07:29:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:16:20.623 INFO: Checking if target configuration is the same... 00:16:20.623 07:29:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:16:20.623 07:29:14 json_config -- json_config/json_config.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.wPOi0I /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:20.623 + '[' 2 -ne 2 ']' 00:16:20.623 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:20.623 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:16:20.623 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:16:20.623 +++ basename /tmp//sh-np.wPOi0I 00:16:20.623 ++ mktemp /tmp/sh-np.wPOi0I.XXX 00:16:20.623 + tmp_file_1=/tmp/sh-np.wPOi0I.XpP 00:16:20.623 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:20.623 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:20.623 + tmp_file_2=/tmp/spdk_tgt_config.json.ghg 00:16:20.623 + ret=0 00:16:20.623 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:20.623 07:29:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:16:20.623 07:29:14 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:20.881 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:21.139 + diff -u /tmp/sh-np.wPOi0I.XpP /tmp/spdk_tgt_config.json.ghg 00:16:21.139 + echo 'INFO: JSON config files are the same' 00:16:21.139 INFO: JSON config files are the same 00:16:21.139 + rm /tmp/sh-np.wPOi0I.XpP /tmp/spdk_tgt_config.json.ghg 00:16:21.139 + exit 0 00:16:21.139 07:29:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:16:21.139 INFO: changing configuration and checking if this can be detected... 00:16:21.139 07:29:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:16:21.139 07:29:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:21.139 07:29:14 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:16:21.397 07:29:14 json_config -- json_config/json_config.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.ndGsqJ /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:21.397 + '[' 2 -ne 2 ']' 00:16:21.397 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:16:21.397 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
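The "JSON config files are the same" verdict above comes from dumping the live configuration over the RPC socket, normalising both JSON documents with config_filter.py -method sort, and diffing the results. The following is a condensed sketch of that flow, assuming config_filter.py reads JSON on stdin and writes the sorted form to stdout as the piped trace suggests; the temp-file names are placeholders, not the real json_diff.sh internals.
# condensed sketch of the config-comparison flow traced above (illustrative only)
rootdir=/usr/home/vagrant/spdk_repo/spdk
live_cfg=$(mktemp /tmp/live_config.json.XXX)
disk_cfg=$(mktemp /tmp/disk_config.json.XXX)
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$rootdir/test/json_config/config_filter.py" -method sort > "$live_cfg"    # normalised running config
"$rootdir/test/json_config/config_filter.py" -method sort \
    < "$rootdir/spdk_tgt_config.json" > "$disk_cfg"                              # normalised saved config
if diff -u "$live_cfg" "$disk_cfg"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live_cfg" "$disk_cfg"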
00:16:21.397 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:16:21.397 +++ basename /tmp//sh-np.ndGsqJ 00:16:21.397 ++ mktemp /tmp/sh-np.ndGsqJ.XXX 00:16:21.397 + tmp_file_1=/tmp/sh-np.ndGsqJ.anH 00:16:21.397 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:21.397 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:16:21.397 + tmp_file_2=/tmp/spdk_tgt_config.json.hte 00:16:21.397 + ret=0 00:16:21.397 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:21.397 07:29:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:16:21.397 07:29:14 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:16:21.655 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:16:21.655 + diff -u /tmp/sh-np.ndGsqJ.anH /tmp/spdk_tgt_config.json.hte 00:16:21.655 + ret=1 00:16:21.655 + echo '=== Start of file: /tmp/sh-np.ndGsqJ.anH ===' 00:16:21.655 + cat /tmp/sh-np.ndGsqJ.anH 00:16:21.655 + echo '=== End of file: /tmp/sh-np.ndGsqJ.anH ===' 00:16:21.655 + echo '' 00:16:21.655 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hte ===' 00:16:21.655 + cat /tmp/spdk_tgt_config.json.hte 00:16:21.655 + echo '=== End of file: /tmp/spdk_tgt_config.json.hte ===' 00:16:21.655 + echo '' 00:16:21.655 + rm /tmp/sh-np.ndGsqJ.anH /tmp/spdk_tgt_config.json.hte 00:16:21.655 + exit 1 00:16:21.655 INFO: configuration change detected. 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:16:21.655 07:29:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.655 07:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@317 -- # [[ -n 46925 ]] 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:16:21.655 07:29:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.655 07:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:16:21.655 07:29:15 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:16:21.655 07:29:15 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:16:22.220 07:29:15 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:16:22.220 07:29:15 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:16:22.220 07:29:15 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:16:22.220 07:29:15 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_lvol_delete lvs_test/snapshot0 00:16:22.478 07:29:16 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:16:22.478 07:29:16 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:16:22.736 07:29:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:16:22.736 07:29:16 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:16:22.736 07:29:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:16:22.736 07:29:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.736 07:29:16 json_config -- json_config/json_config.sh@323 -- # killprocess 46925 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@946 -- # '[' -z 46925 ']' 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@950 -- # kill -0 46925 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@951 -- # uname 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@954 -- # ps -c -o command 46925 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@954 -- # tail -1 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:22.736 killing process with pid 46925 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 46925' 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@965 -- # kill 46925 00:16:22.736 07:29:16 json_config -- common/autotest_common.sh@970 -- # wait 46925 00:16:22.994 07:29:16 json_config -- json_config/json_config.sh@326 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:16:22.994 07:29:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:16:22.994 07:29:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.994 07:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 07:29:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:16:23.253 INFO: Success 00:16:23.253 07:29:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:16:23.253 00:16:23.253 real 0m11.661s 00:16:23.253 user 0m18.661s 00:16:23.253 sys 0m1.910s 00:16:23.253 07:29:16 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.253 ************************************ 00:16:23.253 END TEST json_config 00:16:23.253 ************************************ 00:16:23.253 07:29:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 07:29:16 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:23.253 07:29:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:23.253 07:29:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.253 07:29:16 -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 ************************************ 00:16:23.253 START TEST json_config_extra_key 00:16:23.253 
************************************ 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.253 07:29:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:16:23.253 07:29:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:16:23.253 07:29:16 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:16:23.253 INFO: launching applications... 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:16:23.253 07:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=47058 00:16:23.253 Waiting for target to run... 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 47058 /var/tmp/spdk_tgt.sock 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 47058 ']' 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:16:23.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:16:23.253 07:29:16 json_config_extra_key -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:23.253 07:29:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:23.253 [2024-05-16 07:29:16.780411] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:23.253 [2024-05-16 07:29:16.780616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:23.511 EAL: TSC is not safe to use in SMP mode 00:16:23.511 EAL: TSC is not invariant 00:16:23.511 [2024-05-16 07:29:17.037152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.769 [2024-05-16 07:29:17.119223] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:23.769 [2024-05-16 07:29:17.121521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.341 07:29:17 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:24.341 07:29:17 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:16:24.341 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:16:24.341 INFO: shutting down applications... 00:16:24.341 07:29:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:16:24.341 07:29:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 47058 ]] 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 47058 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47058 00:16:24.341 07:29:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 47058 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:16:24.911 SPDK target shutdown done 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:24.911 07:29:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:24.911 Success 00:16:24.911 07:29:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:16:24.911 00:16:24.911 real 0m1.769s 00:16:24.911 user 0m1.679s 00:16:24.912 sys 0m0.408s 00:16:24.912 07:29:18 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:24.912 07:29:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 ************************************ 00:16:24.912 END TEST json_config_extra_key 00:16:24.912 ************************************ 00:16:24.912 07:29:18 -- spdk/autotest.sh@170 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:24.912 07:29:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:24.912 07:29:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:24.912 07:29:18 -- common/autotest_common.sh@10 -- # set +x 00:16:24.912 ************************************ 00:16:24.912 START TEST alias_rpc 00:16:24.912 ************************************ 00:16:24.912 07:29:18 alias_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:25.169 * Looking for test storage... 
00:16:25.169 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:16:25.169 07:29:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:25.169 07:29:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.169 07:29:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=47116 00:16:25.169 07:29:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 47116 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 47116 ']' 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.169 07:29:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.169 [2024-05-16 07:29:18.604910] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:25.169 [2024-05-16 07:29:18.605081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:25.733 EAL: TSC is not safe to use in SMP mode 00:16:25.733 EAL: TSC is not invariant 00:16:25.733 [2024-05-16 07:29:19.100431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.733 [2024-05-16 07:29:19.196851] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:25.733 [2024-05-16 07:29:19.199506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.300 07:29:19 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.300 07:29:19 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:16:26.300 07:29:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:16:26.558 07:29:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 47116 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 47116 ']' 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 47116 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 47116 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@954 -- # tail -1 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:26.558 killing process with pid 47116 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47116' 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@965 -- # kill 47116 00:16:26.558 07:29:20 alias_rpc -- common/autotest_common.sh@970 -- # wait 47116 00:16:26.814 00:16:26.815 real 0m1.908s 00:16:26.815 user 0m2.175s 00:16:26.815 sys 0m0.742s 00:16:26.815 07:29:20 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:26.815 07:29:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.815 ************************************ 00:16:26.815 END TEST alias_rpc 00:16:26.815 ************************************ 00:16:26.815 07:29:20 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:16:26.815 07:29:20 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:26.815 07:29:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:26.815 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.815 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:16:26.815 ************************************ 00:16:26.815 START TEST spdkcli_tcp 00:16:26.815 ************************************ 00:16:26.815 07:29:20 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:27.073 * Looking for test storage... 
00:16:27.073 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=47181 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 47181 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 47181 ']' 00:16:27.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:27.073 07:29:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.073 07:29:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:27.074 [2024-05-16 07:29:20.576670] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:27.074 [2024-05-16 07:29:20.576941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:27.639 EAL: TSC is not safe to use in SMP mode 00:16:27.639 EAL: TSC is not invariant 00:16:27.639 [2024-05-16 07:29:21.061104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.639 [2024-05-16 07:29:21.160071] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:27.639 [2024-05-16 07:29:21.160170] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:16:27.639 [2024-05-16 07:29:21.163857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.639 [2024-05-16 07:29:21.163853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.204 07:29:21 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:28.204 07:29:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:16:28.204 07:29:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=47189 00:16:28.204 07:29:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:28.204 07:29:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:28.469 [ 00:16:28.469 "spdk_get_version", 00:16:28.469 "rpc_get_methods", 00:16:28.469 "env_dpdk_get_mem_stats", 00:16:28.469 "trace_get_info", 00:16:28.469 "trace_get_tpoint_group_mask", 00:16:28.470 "trace_disable_tpoint_group", 00:16:28.470 "trace_enable_tpoint_group", 00:16:28.470 "trace_clear_tpoint_mask", 00:16:28.470 "trace_set_tpoint_mask", 00:16:28.470 "notify_get_notifications", 00:16:28.470 "notify_get_types", 00:16:28.470 "accel_get_stats", 00:16:28.470 "accel_set_options", 00:16:28.470 "accel_set_driver", 00:16:28.470 "accel_crypto_key_destroy", 00:16:28.470 "accel_crypto_keys_get", 00:16:28.470 "accel_crypto_key_create", 00:16:28.470 "accel_assign_opc", 00:16:28.470 "accel_get_module_info", 00:16:28.470 "accel_get_opc_assignments", 00:16:28.470 "bdev_get_histogram", 00:16:28.470 "bdev_enable_histogram", 00:16:28.470 "bdev_set_qos_limit", 00:16:28.470 "bdev_set_qd_sampling_period", 00:16:28.470 "bdev_get_bdevs", 00:16:28.470 "bdev_reset_iostat", 00:16:28.470 "bdev_get_iostat", 00:16:28.470 "bdev_examine", 00:16:28.470 "bdev_wait_for_examine", 00:16:28.470 "bdev_set_options", 00:16:28.470 "keyring_get_keys", 00:16:28.470 "framework_get_pci_devices", 00:16:28.470 "framework_get_config", 00:16:28.470 "framework_get_subsystems", 00:16:28.470 "sock_get_default_impl", 00:16:28.470 "sock_set_default_impl", 00:16:28.470 "sock_impl_set_options", 00:16:28.470 "sock_impl_get_options", 00:16:28.470 "thread_set_cpumask", 00:16:28.470 "framework_get_scheduler", 00:16:28.470 "framework_set_scheduler", 00:16:28.470 "framework_get_reactors", 00:16:28.470 "thread_get_io_channels", 00:16:28.470 "thread_get_pollers", 00:16:28.470 "thread_get_stats", 00:16:28.470 "framework_monitor_context_switch", 00:16:28.470 "spdk_kill_instance", 00:16:28.470 "log_enable_timestamps", 00:16:28.470 "log_get_flags", 00:16:28.470 "log_clear_flag", 00:16:28.470 "log_set_flag", 00:16:28.470 "log_get_level", 00:16:28.470 "log_set_level", 00:16:28.470 "log_get_print_level", 00:16:28.470 "log_set_print_level", 00:16:28.470 "framework_enable_cpumask_locks", 00:16:28.470 "framework_disable_cpumask_locks", 00:16:28.470 "framework_wait_init", 00:16:28.470 "framework_start_init", 00:16:28.470 "iobuf_get_stats", 00:16:28.470 "iobuf_set_options", 00:16:28.470 "vmd_rescan", 00:16:28.470 "vmd_remove_device", 00:16:28.470 "vmd_enable", 00:16:28.470 "nvmf_stop_mdns_prr", 00:16:28.470 "nvmf_publish_mdns_prr", 00:16:28.470 "nvmf_subsystem_get_listeners", 00:16:28.470 "nvmf_subsystem_get_qpairs", 00:16:28.470 "nvmf_subsystem_get_controllers", 00:16:28.470 "nvmf_get_stats", 00:16:28.470 "nvmf_get_transports", 00:16:28.470 "nvmf_create_transport", 00:16:28.470 "nvmf_get_targets", 00:16:28.470 "nvmf_delete_target", 00:16:28.470 "nvmf_create_target", 00:16:28.470 "nvmf_subsystem_allow_any_host", 00:16:28.470 
"nvmf_subsystem_remove_host", 00:16:28.470 "nvmf_subsystem_add_host", 00:16:28.470 "nvmf_ns_remove_host", 00:16:28.470 "nvmf_ns_add_host", 00:16:28.470 "nvmf_subsystem_remove_ns", 00:16:28.470 "nvmf_subsystem_add_ns", 00:16:28.470 "nvmf_subsystem_listener_set_ana_state", 00:16:28.470 "nvmf_discovery_get_referrals", 00:16:28.470 "nvmf_discovery_remove_referral", 00:16:28.470 "nvmf_discovery_add_referral", 00:16:28.470 "nvmf_subsystem_remove_listener", 00:16:28.470 "nvmf_subsystem_add_listener", 00:16:28.470 "nvmf_delete_subsystem", 00:16:28.470 "nvmf_create_subsystem", 00:16:28.470 "nvmf_get_subsystems", 00:16:28.470 "nvmf_set_crdt", 00:16:28.470 "nvmf_set_config", 00:16:28.470 "nvmf_set_max_subsystems", 00:16:28.470 "scsi_get_devices", 00:16:28.470 "iscsi_get_histogram", 00:16:28.470 "iscsi_enable_histogram", 00:16:28.470 "iscsi_set_options", 00:16:28.470 "iscsi_get_auth_groups", 00:16:28.470 "iscsi_auth_group_remove_secret", 00:16:28.470 "iscsi_auth_group_add_secret", 00:16:28.470 "iscsi_delete_auth_group", 00:16:28.470 "iscsi_create_auth_group", 00:16:28.470 "iscsi_set_discovery_auth", 00:16:28.470 "iscsi_get_options", 00:16:28.470 "iscsi_target_node_request_logout", 00:16:28.470 "iscsi_target_node_set_redirect", 00:16:28.470 "iscsi_target_node_set_auth", 00:16:28.470 "iscsi_target_node_add_lun", 00:16:28.470 "iscsi_get_stats", 00:16:28.470 "iscsi_get_connections", 00:16:28.470 "iscsi_portal_group_set_auth", 00:16:28.470 "iscsi_start_portal_group", 00:16:28.470 "iscsi_delete_portal_group", 00:16:28.470 "iscsi_create_portal_group", 00:16:28.470 "iscsi_get_portal_groups", 00:16:28.470 "iscsi_delete_target_node", 00:16:28.470 "iscsi_target_node_remove_pg_ig_maps", 00:16:28.470 "iscsi_target_node_add_pg_ig_maps", 00:16:28.470 "iscsi_create_target_node", 00:16:28.470 "iscsi_get_target_nodes", 00:16:28.470 "iscsi_delete_initiator_group", 00:16:28.470 "iscsi_initiator_group_remove_initiators", 00:16:28.470 "iscsi_initiator_group_add_initiators", 00:16:28.470 "iscsi_create_initiator_group", 00:16:28.470 "iscsi_get_initiator_groups", 00:16:28.470 "keyring_file_remove_key", 00:16:28.470 "keyring_file_add_key", 00:16:28.470 "iaa_scan_accel_module", 00:16:28.470 "dsa_scan_accel_module", 00:16:28.470 "ioat_scan_accel_module", 00:16:28.470 "accel_error_inject_error", 00:16:28.470 "bdev_aio_delete", 00:16:28.470 "bdev_aio_rescan", 00:16:28.470 "bdev_aio_create", 00:16:28.470 "blobfs_create", 00:16:28.470 "blobfs_detect", 00:16:28.470 "blobfs_set_cache_size", 00:16:28.470 "bdev_zone_block_delete", 00:16:28.470 "bdev_zone_block_create", 00:16:28.470 "bdev_delay_delete", 00:16:28.470 "bdev_delay_create", 00:16:28.470 "bdev_delay_update_latency", 00:16:28.470 "bdev_split_delete", 00:16:28.470 "bdev_split_create", 00:16:28.470 "bdev_error_inject_error", 00:16:28.470 "bdev_error_delete", 00:16:28.470 "bdev_error_create", 00:16:28.470 "bdev_raid_set_options", 00:16:28.470 "bdev_raid_remove_base_bdev", 00:16:28.470 "bdev_raid_add_base_bdev", 00:16:28.470 "bdev_raid_delete", 00:16:28.470 "bdev_raid_create", 00:16:28.470 "bdev_raid_get_bdevs", 00:16:28.470 "bdev_lvol_set_parent_bdev", 00:16:28.470 "bdev_lvol_set_parent", 00:16:28.470 "bdev_lvol_check_shallow_copy", 00:16:28.470 "bdev_lvol_start_shallow_copy", 00:16:28.470 "bdev_lvol_grow_lvstore", 00:16:28.470 "bdev_lvol_get_lvols", 00:16:28.470 "bdev_lvol_get_lvstores", 00:16:28.470 "bdev_lvol_delete", 00:16:28.470 "bdev_lvol_set_read_only", 00:16:28.470 "bdev_lvol_resize", 00:16:28.470 "bdev_lvol_decouple_parent", 00:16:28.470 "bdev_lvol_inflate", 00:16:28.470 
"bdev_lvol_rename", 00:16:28.470 "bdev_lvol_clone_bdev", 00:16:28.470 "bdev_lvol_clone", 00:16:28.470 "bdev_lvol_snapshot", 00:16:28.470 "bdev_lvol_create", 00:16:28.470 "bdev_lvol_delete_lvstore", 00:16:28.470 "bdev_lvol_rename_lvstore", 00:16:28.470 "bdev_lvol_create_lvstore", 00:16:28.470 "bdev_passthru_delete", 00:16:28.470 "bdev_passthru_create", 00:16:28.470 "bdev_nvme_send_cmd", 00:16:28.470 "bdev_nvme_get_path_iostat", 00:16:28.470 "bdev_nvme_get_mdns_discovery_info", 00:16:28.470 "bdev_nvme_stop_mdns_discovery", 00:16:28.470 "bdev_nvme_start_mdns_discovery", 00:16:28.470 "bdev_nvme_set_multipath_policy", 00:16:28.470 "bdev_nvme_set_preferred_path", 00:16:28.470 "bdev_nvme_get_io_paths", 00:16:28.470 "bdev_nvme_remove_error_injection", 00:16:28.470 "bdev_nvme_add_error_injection", 00:16:28.470 "bdev_nvme_get_discovery_info", 00:16:28.470 "bdev_nvme_stop_discovery", 00:16:28.470 "bdev_nvme_start_discovery", 00:16:28.470 "bdev_nvme_get_controller_health_info", 00:16:28.471 "bdev_nvme_disable_controller", 00:16:28.471 "bdev_nvme_enable_controller", 00:16:28.471 "bdev_nvme_reset_controller", 00:16:28.471 "bdev_nvme_get_transport_statistics", 00:16:28.471 "bdev_nvme_apply_firmware", 00:16:28.471 "bdev_nvme_detach_controller", 00:16:28.471 "bdev_nvme_get_controllers", 00:16:28.471 "bdev_nvme_attach_controller", 00:16:28.471 "bdev_nvme_set_hotplug", 00:16:28.471 "bdev_nvme_set_options", 00:16:28.471 "bdev_null_resize", 00:16:28.471 "bdev_null_delete", 00:16:28.471 "bdev_null_create", 00:16:28.471 "bdev_malloc_delete", 00:16:28.471 "bdev_malloc_create" 00:16:28.471 ] 00:16:28.471 07:29:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:28.471 07:29:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.471 07:29:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.471 07:29:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:28.471 07:29:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 47181 00:16:28.471 07:29:22 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 47181 ']' 00:16:28.471 07:29:22 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 47181 00:16:28.471 07:29:22 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:16:28.471 07:29:22 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps -c -o command 47181 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # tail -1 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:28.733 killing process with pid 47181 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47181' 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 47181 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 47181 00:16:28.733 00:16:28.733 real 0m1.904s 00:16:28.733 user 0m3.122s 00:16:28.733 sys 0m0.793s 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:28.733 07:29:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.733 ************************************ 00:16:28.733 END TEST spdkcli_tcp 00:16:28.733 ************************************ 00:16:28.989 07:29:22 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility 
/usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:28.989 07:29:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:28.989 07:29:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:28.989 07:29:22 -- common/autotest_common.sh@10 -- # set +x 00:16:28.989 ************************************ 00:16:28.989 START TEST dpdk_mem_utility 00:16:28.989 ************************************ 00:16:28.989 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:28.989 * Looking for test storage... 00:16:28.989 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:28.989 07:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:28.989 07:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=47260 00:16:28.989 07:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 47260 00:16:28.989 07:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:28.989 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 47260 ']' 00:16:28.989 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.989 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:28.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.990 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.990 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:28.990 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:28.990 [2024-05-16 07:29:22.508685] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:28.990 [2024-05-16 07:29:22.508955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:29.554 EAL: TSC is not safe to use in SMP mode 00:16:29.554 EAL: TSC is not invariant 00:16:29.554 [2024-05-16 07:29:22.960923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.554 [2024-05-16 07:29:23.055075] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:29.554 [2024-05-16 07:29:23.057780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.121 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:30.121 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:16:30.121 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:30.121 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:30.121 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.121 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:30.121 { 00:16:30.121 "filename": "/tmp/spdk_mem_dump.txt" 00:16:30.121 } 00:16:30.121 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.121 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:30.379 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:16:30.379 1 heaps totaling size 2048.000000 MiB 00:16:30.379 size: 2048.000000 MiB heap id: 0 00:16:30.379 end heaps---------- 00:16:30.379 8 mempools totaling size 592.563660 MiB 00:16:30.379 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:16:30.379 size: 153.489014 MiB name: PDU_data_out_Pool 00:16:30.379 size: 84.500549 MiB name: bdev_io_47260 00:16:30.379 size: 51.008362 MiB name: evtpool_47260 00:16:30.379 size: 50.000549 MiB name: msgpool_47260 00:16:30.379 size: 21.758911 MiB name: PDU_Pool 00:16:30.379 size: 19.508911 MiB name: SCSI_TASK_Pool 00:16:30.379 size: 0.026123 MiB name: Session_Pool 00:16:30.379 end mempools------- 00:16:30.379 6 memzones totaling size 4.142822 MiB 00:16:30.379 size: 1.000366 MiB name: RG_ring_0_47260 00:16:30.379 size: 1.000366 MiB name: RG_ring_1_47260 00:16:30.379 size: 1.000366 MiB name: RG_ring_4_47260 00:16:30.379 size: 1.000366 MiB name: RG_ring_5_47260 00:16:30.379 size: 0.125366 MiB name: RG_ring_2_47260 00:16:30.379 size: 0.015991 MiB name: RG_ring_3_47260 00:16:30.379 end memzones------- 00:16:30.379 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:30.379 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:16:30.379 list of free elements. size: 1254.071899 MiB 00:16:30.379 element at address: 0x1060000000 with size: 1254.001099 MiB 00:16:30.379 element at address: 0x10c8000000 with size: 0.070129 MiB 00:16:30.379 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:16:30.379 list of standard malloc elements. 
size: 197.217957 MiB 00:16:30.379 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:16:30.379 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:16:30.379 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:16:30.379 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:16:30.379 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:16:30.379 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:16:30.380 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:16:30.380 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:16:30.380 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:16:30.380 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:16:30.380 list of memzone associated elements. 
size: 596.710144 MiB 00:16:30.380 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:16:30.380 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:16:30.380 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:16:30.380 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:16:30.380 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:16:30.380 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_47260_0 00:16:30.380 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:16:30.380 associated memzone info: size: 48.000000 MiB name: MP_evtpool_47260_0 00:16:30.380 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:16:30.380 associated memzone info: size: 48.000000 MiB name: MP_msgpool_47260_0 00:16:30.380 element at address: 0x10c683d780 with size: 20.250671 MiB 00:16:30.380 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:16:30.380 element at address: 0x10ae700680 with size: 18.000671 MiB 00:16:30.380 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:16:30.380 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:16:30.380 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_47260 00:16:30.380 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:16:30.380 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_47260 00:16:30.380 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:16:30.380 associated memzone info: size: 1.007996 MiB name: MP_evtpool_47260 00:16:30.380 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:16:30.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:30.380 element at address: 0x10c673b640 with size: 1.008118 MiB 00:16:30.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:30.380 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:16:30.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:30.380 element at address: 0x10af980b40 with size: 1.008118 MiB 00:16:30.380 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:30.380 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:16:30.380 associated memzone info: size: 1.000366 MiB name: RG_ring_0_47260 00:16:30.380 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:16:30.380 associated memzone info: size: 1.000366 MiB name: RG_ring_1_47260 00:16:30.380 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:16:30.380 associated memzone info: size: 1.000366 MiB name: RG_ring_4_47260 00:16:30.380 element at address: 0x10ae600480 with size: 1.000488 MiB 00:16:30.380 associated memzone info: size: 1.000366 MiB name: RG_ring_5_47260 00:16:30.380 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:16:30.380 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_47260 00:16:30.380 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:16:30.380 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:30.380 element at address: 0x10af900940 with size: 0.500488 MiB 00:16:30.380 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:30.380 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:16:30.380 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:30.380 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:16:30.380 associated memzone info: size: 0.125366 MiB name: RG_ring_2_47260 00:16:30.380 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:16:30.380 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:30.380 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:16:30.380 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:30.380 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:16:30.380 associated memzone info: size: 0.015991 MiB name: RG_ring_3_47260 00:16:30.380 element at address: 0x10c8018080 with size: 0.002441 MiB 00:16:30.380 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:30.380 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:16:30.380 associated memzone info: size: 0.000183 MiB name: MP_msgpool_47260 00:16:30.380 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:16:30.380 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_47260 00:16:30.380 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:16:30.380 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:30.380 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:30.380 07:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 47260 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 47260 ']' 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 47260 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps -c -o command 47260 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # tail -1 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:30.380 killing process with pid 47260 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47260' 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 47260 00:16:30.380 07:29:23 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 47260 00:16:30.639 00:16:30.639 real 0m1.734s 00:16:30.639 user 0m1.914s 00:16:30.639 sys 0m0.662s 00:16:30.639 ************************************ 00:16:30.639 END TEST dpdk_mem_utility 00:16:30.639 ************************************ 00:16:30.639 07:29:24 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:30.639 07:29:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:16:30.639 07:29:24 -- spdk/autotest.sh@177 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:30.639 07:29:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:30.639 07:29:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:30.639 07:29:24 -- common/autotest_common.sh@10 -- # set +x 00:16:30.639 ************************************ 00:16:30.639 START TEST event 00:16:30.639 ************************************ 00:16:30.639 07:29:24 event -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:30.898 * Looking for test storage... 
00:16:30.898 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:16:30.898 07:29:24 event -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:30.898 07:29:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:16:30.898 07:29:24 event -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:30.898 07:29:24 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:16:30.898 07:29:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:30.898 07:29:24 event -- common/autotest_common.sh@10 -- # set +x 00:16:30.898 ************************************ 00:16:30.898 START TEST event_perf 00:16:30.898 ************************************ 00:16:30.898 07:29:24 event.event_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:30.898 Running I/O for 1 seconds...[2024-05-16 07:29:24.296779] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:30.898 [2024-05-16 07:29:24.296988] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:31.192 EAL: TSC is not safe to use in SMP mode 00:16:31.192 EAL: TSC is not invariant 00:16:31.450 [2024-05-16 07:29:24.764236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.450 [2024-05-16 07:29:24.843300] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:31.450 [2024-05-16 07:29:24.843355] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:31.450 [2024-05-16 07:29:24.843363] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:16:31.450 [2024-05-16 07:29:24.843371] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:16:31.450 [2024-05-16 07:29:24.847041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.450 [2024-05-16 07:29:24.847310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.450 Running I/O for 1 seconds...[2024-05-16 07:29:24.847189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.450 [2024-05-16 07:29:24.847304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.385 00:16:32.385 lcore 0: 2296376 00:16:32.385 lcore 1: 2296378 00:16:32.385 lcore 2: 2296376 00:16:32.385 lcore 3: 2296376 00:16:32.644 done. 
00:16:32.644 00:16:32.644 real 0m1.676s 00:16:32.644 user 0m4.170s 00:16:32.644 sys 0m0.502s 00:16:32.644 07:29:25 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:32.644 07:29:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 ************************************ 00:16:32.644 END TEST event_perf 00:16:32.644 ************************************ 00:16:32.644 07:29:25 event -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:32.644 07:29:26 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:32.644 07:29:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:32.644 07:29:26 event -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 ************************************ 00:16:32.644 START TEST event_reactor 00:16:32.644 ************************************ 00:16:32.644 07:29:26 event.event_reactor -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:32.644 [2024-05-16 07:29:26.019784] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:32.644 [2024-05-16 07:29:26.020120] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:32.902 EAL: TSC is not safe to use in SMP mode 00:16:32.902 EAL: TSC is not invariant 00:16:32.902 [2024-05-16 07:29:26.465665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.160 [2024-05-16 07:29:26.546224] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:33.161 [2024-05-16 07:29:26.548643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.534 test_start 00:16:34.534 oneshot 00:16:34.534 tick 100 00:16:34.534 tick 100 00:16:34.534 tick 250 00:16:34.534 tick 100 00:16:34.534 tick 100 00:16:34.534 tick 100 00:16:34.534 tick 250 00:16:34.534 tick 500 00:16:34.534 tick 100 00:16:34.534 tick 100 00:16:34.534 tick 250 00:16:34.535 tick 100 00:16:34.535 tick 100 00:16:34.535 test_end 00:16:34.535 00:16:34.535 real 0m1.656s 00:16:34.535 user 0m1.172s 00:16:34.535 sys 0m0.481s 00:16:34.535 07:29:27 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:34.535 07:29:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 ************************************ 00:16:34.535 END TEST event_reactor 00:16:34.535 ************************************ 00:16:34.535 07:29:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:34.535 07:29:27 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:34.535 07:29:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.535 07:29:27 event -- common/autotest_common.sh@10 -- # set +x 00:16:34.535 ************************************ 00:16:34.535 START TEST event_reactor_perf 00:16:34.535 ************************************ 00:16:34.535 07:29:27 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:34.535 [2024-05-16 07:29:27.714238] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
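Each case in this log follows the same pattern: run_test prints a START TEST banner, times the command, and closes with real/user/sys figures and an END TEST banner, as event_perf and event_reactor just did. A simplified stand-in for such a wrapper (the real helper in autotest_common.sh also manages xtrace and error trapping, which is omitted here):

    # Minimal sketch of a run_test-style wrapper; banner decoration is trimmed.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return $rc
    }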
00:16:34.535 [2024-05-16 07:29:27.714491] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:34.792 EAL: TSC is not safe to use in SMP mode 00:16:34.793 EAL: TSC is not invariant 00:16:34.793 [2024-05-16 07:29:28.167227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.793 [2024-05-16 07:29:28.260333] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:34.793 [2024-05-16 07:29:28.262977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.185 test_start 00:16:36.185 test_end 00:16:36.185 Performance: 3763360 events per second 00:16:36.185 00:16:36.185 real 0m1.672s 00:16:36.185 user 0m1.189s 00:16:36.185 sys 0m0.481s 00:16:36.185 07:29:29 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.185 07:29:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 ************************************ 00:16:36.185 END TEST event_reactor_perf 00:16:36.185 ************************************ 00:16:36.185 07:29:29 event -- event/event.sh@49 -- # uname -s 00:16:36.185 07:29:29 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:16:36.185 00:16:36.185 real 0m5.317s 00:16:36.185 user 0m6.708s 00:16:36.185 sys 0m1.700s 00:16:36.185 07:29:29 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.185 ************************************ 00:16:36.185 END TEST event 00:16:36.185 ************************************ 00:16:36.185 07:29:29 event -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 07:29:29 -- spdk/autotest.sh@178 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:36.185 07:29:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:36.185 07:29:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.185 07:29:29 -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 ************************************ 00:16:36.185 START TEST thread 00:16:36.185 ************************************ 00:16:36.185 07:29:29 thread -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:36.185 * Looking for test storage... 00:16:36.185 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:16:36.185 07:29:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:36.185 07:29:29 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:16:36.185 07:29:29 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.185 07:29:29 thread -- common/autotest_common.sh@10 -- # set +x 00:16:36.185 ************************************ 00:16:36.185 START TEST thread_poller_perf 00:16:36.185 ************************************ 00:16:36.185 07:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:36.185 [2024-05-16 07:29:29.673580] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:16:36.185 [2024-05-16 07:29:29.673861] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:36.749 EAL: TSC is not safe to use in SMP mode 00:16:36.749 EAL: TSC is not invariant 00:16:36.749 [2024-05-16 07:29:30.181486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.749 [2024-05-16 07:29:30.266610] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:36.749 [2024-05-16 07:29:30.268825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.749 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:38.124 ====================================== 00:16:38.124 busy:2101453754 (cyc) 00:16:38.124 total_run_count: 6275000 00:16:38.124 tsc_hz: 2100006180 (cyc) 00:16:38.124 ====================================== 00:16:38.124 poller_cost: 334 (cyc), 159 (nsec) 00:16:38.124 00:16:38.124 real 0m1.722s 00:16:38.124 user 0m1.173s 00:16:38.124 sys 0m0.549s 00:16:38.124 07:29:31 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:38.124 ************************************ 00:16:38.124 END TEST thread_poller_perf 00:16:38.124 ************************************ 00:16:38.124 07:29:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:38.124 07:29:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:38.124 07:29:31 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:16:38.124 07:29:31 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:38.124 07:29:31 thread -- common/autotest_common.sh@10 -- # set +x 00:16:38.124 ************************************ 00:16:38.124 START TEST thread_poller_perf 00:16:38.124 ************************************ 00:16:38.124 07:29:31 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:38.124 [2024-05-16 07:29:31.435292] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:38.124 [2024-05-16 07:29:31.435491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:38.407 EAL: TSC is not safe to use in SMP mode 00:16:38.407 EAL: TSC is not invariant 00:16:38.407 [2024-05-16 07:29:31.907938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.664 [2024-05-16 07:29:32.001968] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:38.664 Running 1000 pollers for 1 seconds with 0 microseconds period. 
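In the ==== summary just above, poller_cost is busy cycles divided by total_run_count, converted to nanoseconds using the measured TSC frequency. Reproducing that arithmetic with the numbers from the 1-microsecond-period run (values copied by hand from the summary, not produced by the harness):

    busy=2101453754; runs=6275000; tsc_hz=2100006180
    cyc=$((busy / runs))      # 2101453754 / 6275000 = 334 cycles per poller call
    nsec=$(awk -v c="$cyc" -v hz="$tsc_hz" 'BEGIN {printf "%.0f", c * 1e9 / hz}')   # ~159 ns
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic applies to the zero-period run reported next.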
00:16:38.664 [2024-05-16 07:29:32.004547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.597 ====================================== 00:16:39.597 busy:2101154488 (cyc) 00:16:39.597 total_run_count: 55614000 00:16:39.597 tsc_hz: 2100006180 (cyc) 00:16:39.597 ====================================== 00:16:39.597 poller_cost: 37 (cyc), 17 (nsec) 00:16:39.597 00:16:39.597 real 0m1.696s 00:16:39.597 user 0m1.182s 00:16:39.597 sys 0m0.510s 00:16:39.597 07:29:33 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:39.597 07:29:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:39.597 ************************************ 00:16:39.597 END TEST thread_poller_perf 00:16:39.597 ************************************ 00:16:39.597 07:29:33 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:16:39.597 07:29:33 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:16:39.597 07:29:33 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:39.597 07:29:33 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:39.597 07:29:33 thread -- common/autotest_common.sh@10 -- # set +x 00:16:39.597 ************************************ 00:16:39.597 START TEST thread_spdk_lock 00:16:39.597 ************************************ 00:16:39.855 07:29:33 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:16:39.855 [2024-05-16 07:29:33.170277] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:39.855 [2024-05-16 07:29:33.170443] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:40.112 EAL: TSC is not safe to use in SMP mode 00:16:40.112 EAL: TSC is not invariant 00:16:40.112 [2024-05-16 07:29:33.629925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:40.368 [2024-05-16 07:29:33.716136] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:40.368 [2024-05-16 07:29:33.716199] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:16:40.368 [2024-05-16 07:29:33.719268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.368 [2024-05-16 07:29:33.719258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.726 [2024-05-16 07:29:34.161554] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:40.726 [2024-05-16 07:29:34.161617] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:16:40.726 [2024-05-16 07:29:34.161648] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x315020 00:16:40.726 [2024-05-16 07:29:34.162154] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:40.726 [2024-05-16 07:29:34.162254] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:40.726 [2024-05-16 07:29:34.162271] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:16:40.984 Starting test contend 00:16:40.984 Worker Delay Wait us Hold us Total us 00:16:40.984 0 3 262303 164410 426713 00:16:40.984 1 5 163280 265957 429238 00:16:40.984 PASS test contend 00:16:40.984 Starting test hold_by_poller 00:16:40.984 PASS test hold_by_poller 00:16:40.984 Starting test hold_by_message 00:16:40.984 PASS test hold_by_message 00:16:40.984 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:16:40.984 100014 assertions passed 00:16:40.984 0 assertions failed 00:16:40.984 00:16:40.984 real 0m1.117s 00:16:40.984 user 0m1.060s 00:16:40.984 sys 0m0.499s 00:16:40.984 07:29:34 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:40.984 ************************************ 00:16:40.984 END TEST thread_spdk_lock 00:16:40.984 07:29:34 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 ************************************ 00:16:40.984 00:16:40.984 real 0m4.858s 00:16:40.984 user 0m3.594s 00:16:40.984 sys 0m1.768s 00:16:40.984 07:29:34 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:40.984 07:29:34 thread -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 ************************************ 00:16:40.984 END TEST thread 00:16:40.984 ************************************ 00:16:40.984 07:29:34 -- spdk/autotest.sh@179 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:40.984 07:29:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:40.984 07:29:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:40.984 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 ************************************ 00:16:40.984 START TEST accel 00:16:40.984 ************************************ 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:40.984 * Looking for test storage... 
00:16:40.984 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:16:40.984 07:29:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:40.984 07:29:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:16:40.984 07:29:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:40.984 07:29:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=47560 00:16:40.984 07:29:34 accel -- accel/accel.sh@63 -- # waitforlisten 47560 00:16:40.984 07:29:34 accel -- accel/accel.sh@61 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.3l5nsm 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@827 -- # '[' -z 47560 ']' 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:40.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:40.984 07:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 [2024-05-16 07:29:34.548425] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:40.984 [2024-05-16 07:29:34.548638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:41.551 EAL: TSC is not safe to use in SMP mode 00:16:41.551 EAL: TSC is not invariant 00:16:41.551 [2024-05-16 07:29:35.026786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.551 [2024-05-16 07:29:35.108920] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:41.551 07:29:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:16:41.551 07:29:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:41.551 07:29:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:41.551 07:29:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:41.551 07:29:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:41.551 07:29:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:41.551 07:29:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:41.551 07:29:35 accel -- accel/accel.sh@41 -- # jq -r . 00:16:41.809 [2024-05-16 07:29:35.120131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.375 07:29:35 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:42.375 07:29:35 accel -- common/autotest_common.sh@860 -- # return 0 00:16:42.375 07:29:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:42.375 07:29:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:42.375 07:29:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:42.375 07:29:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:42.375 07:29:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:42.375 07:29:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:42.375 07:29:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:16:42.375 07:29:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.375 07:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:16:42.375 07:29:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.375 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.375 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.375 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.375 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.375 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.375 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.375 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 
07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # IFS== 00:16:42.376 07:29:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:16:42.376 07:29:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:42.376 07:29:35 accel -- accel/accel.sh@75 -- # killprocess 47560 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@946 -- # '[' -z 47560 ']' 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@950 -- # kill -0 47560 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@951 -- # uname 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@954 -- # ps -c -o command 47560 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@954 -- # tail -1 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:16:42.376 killing process with pid 47560 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 47560' 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@965 -- # kill 47560 00:16:42.376 07:29:35 accel -- common/autotest_common.sh@970 -- # wait 47560 00:16:42.634 07:29:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:16:42.634 07:29:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:42.634 07:29:35 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:42.634 07:29:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:42.634 07:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:16:42.634 07:29:35 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:16:42.634 07:29:35 accel.accel_help -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xBun5E -h 00:16:42.634 07:29:35 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:42.634 07:29:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:16:42.634 07:29:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:42.634 07:29:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:42.634 07:29:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:42.634 07:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:16:42.634 ************************************ 00:16:42.634 START TEST accel_missing_filename 00:16:42.634 ************************************ 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 
00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:42.634 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:16:42.634 07:29:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.QAhx8U -t 1 -w compress 00:16:42.634 [2024-05-16 07:29:36.016882] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:42.634 [2024-05-16 07:29:36.017045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:43.214 EAL: TSC is not safe to use in SMP mode 00:16:43.214 EAL: TSC is not invariant 00:16:43.214 [2024-05-16 07:29:36.508641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.214 [2024-05-16 07:29:36.604652] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:16:43.215 07:29:36 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:16:43.215 [2024-05-16 07:29:36.617505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.215 [2024-05-16 07:29:36.620570] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:43.215 [2024-05-16 07:29:36.651255] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:16:43.489 A filename is required. 
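accel_missing_filename deliberately launches accel_perf with -w compress and no -l input file, and the surrounding NOT wrapper only passes when the binary exits non-zero; the "A filename is required." error above is therefore the expected outcome, and the es= lines that follow merely normalize the exit status down to 1. A minimal stand-in for that must-fail check (assuming accel_perf is on PATH; the real NOT helper lives in autotest_common.sh):

    # The test passes only if this invocation fails.
    if accel_perf -t 1 -w compress; then
        echo 'compress without -l unexpectedly succeeded' >&2
        exit 1
    else
        echo 'compress without -l failed as expected'
    fi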
00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:43.489 00:16:43.489 real 0m0.777s 00:16:43.489 user 0m0.226s 00:16:43.489 sys 0m0.546s 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:43.489 07:29:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:16:43.489 ************************************ 00:16:43.489 END TEST accel_missing_filename 00:16:43.489 ************************************ 00:16:43.489 07:29:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.489 07:29:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:16:43.489 07:29:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:43.489 07:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:16:43.489 ************************************ 00:16:43.489 START TEST accel_compress_verify 00:16:43.489 ************************************ 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:43.489 07:29:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.489 07:29:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.X4sIc0 -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:43.489 [2024-05-16 07:29:36.838565] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:43.489 [2024-05-16 07:29:36.838758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:43.747 EAL: TSC is not safe to use in SMP mode 00:16:43.747 EAL: TSC is not invariant 00:16:44.004 [2024-05-16 07:29:37.321019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.004 [2024-05-16 07:29:37.417824] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:16:44.004 07:29:37 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:16:44.004 [2024-05-16 07:29:37.430148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.004 [2024-05-16 07:29:37.433002] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:44.004 [2024-05-16 07:29:37.463723] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:16:44.262 00:16:44.262 Compression does not support the verify option, aborting. 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.262 00:16:44.262 real 0m0.793s 00:16:44.262 user 0m0.256s 00:16:44.262 sys 0m0.543s 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.262 07:29:37 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:16:44.262 ************************************ 00:16:44.262 END TEST accel_compress_verify 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 ************************************ 00:16:44.263 START TEST accel_wrong_workload 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:16:44.263 
07:29:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AEWOZv -t 1 -w foobar 00:16:44.263 Unsupported workload type: foobar 00:16:44.263 [2024-05-16 07:29:37.670745] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:44.263 accel_perf options: 00:16:44.263 [-h help message] 00:16:44.263 [-q queue depth per core] 00:16:44.263 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:44.263 [-T number of threads per core 00:16:44.263 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:44.263 [-t time in seconds] 00:16:44.263 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:44.263 [ dif_verify, , dif_generate, dif_generate_copy 00:16:44.263 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:44.263 [-l for compress/decompress workloads, name of uncompressed input file 00:16:44.263 [-S for crc32c workload, use this seed value (default 0) 00:16:44.263 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:44.263 [-f for fill workload, use this BYTE value (default 255) 00:16:44.263 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:44.263 [-y verify result if this switch is on] 00:16:44.263 [-a tasks to allocate per core (default: same value as -q)] 00:16:44.263 Can be used to spread operations across a wider range of memory. 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.263 00:16:44.263 real 0m0.010s 00:16:44.263 user 0m0.003s 00:16:44.263 sys 0m0.008s 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.263 07:29:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 ************************************ 00:16:44.263 END TEST accel_wrong_workload 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 ************************************ 00:16:44.263 START TEST accel_negative_buffers 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:16:44.263 07:29:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nciDQh -t 1 -w xor -y -x -1 00:16:44.263 -x option must be non-negative. 00:16:44.263 [2024-05-16 07:29:37.719514] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:44.263 accel_perf options: 00:16:44.263 [-h help message] 00:16:44.263 [-q queue depth per core] 00:16:44.263 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:44.263 [-T number of threads per core 00:16:44.263 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:44.263 [-t time in seconds] 00:16:44.263 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:44.263 [ dif_verify, , dif_generate, dif_generate_copy 00:16:44.263 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:44.263 [-l for compress/decompress workloads, name of uncompressed input file 00:16:44.263 [-S for crc32c workload, use this seed value (default 0) 00:16:44.263 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:44.263 [-f for fill workload, use this BYTE value (default 255) 00:16:44.263 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:44.263 [-y verify result if this switch is on] 00:16:44.263 [-a tasks to allocate per core (default: same value as -q)] 00:16:44.263 Can be used to spread operations across a wider range of memory. 
00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.263 00:16:44.263 real 0m0.010s 00:16:44.263 user 0m0.009s 00:16:44.263 sys 0m0.003s 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.263 07:29:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 ************************************ 00:16:44.263 END TEST accel_negative_buffers 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.263 07:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:16:44.263 ************************************ 00:16:44.263 START TEST accel_crc32c 00:16:44.263 ************************************ 00:16:44.263 07:29:37 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:44.263 07:29:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.qEPpD0 -t 1 -w crc32c -S 32 -y 00:16:44.263 [2024-05-16 07:29:37.766343] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:44.263 [2024-05-16 07:29:37.766586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:44.831 EAL: TSC is not safe to use in SMP mode 00:16:44.831 EAL: TSC is not invariant 00:16:44.831 [2024-05-16 07:29:38.229690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.831 [2024-05-16 07:29:38.326337] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 
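The long val= trace that follows is accel_test reading accel_perf's own startup output back: with IFS=: it splits each printed line into a key and a value, records the opcode (crc32c) and the module that served it (software here, matching the opcode assignments dumped earlier), and finally asserts both are non-empty and the module is the expected one. A condensed sketch of that loop (the key-matching patterns and the capture file name are assumptions; the real loop lives in test/accel/accel.sh):

    # Read back "key: value" lines from a captured accel_perf run.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;
            *module*) accel_module=${val//[[:space:]]/} ;;
        esac
    done < accel_perf_output.txt
    [[ -n "$accel_opc" && "$accel_module" == software ]] && echo "crc32c served by the software module"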
00:16:44.831 [2024-05-16 07:29:38.338306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:44.831 07:29:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.207 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.208 07:29:39 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:46.208 07:29:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:46.208 00:16:46.208 real 0m1.734s 00:16:46.208 user 0m1.238s 00:16:46.208 sys 0m0.504s 00:16:46.208 07:29:39 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.208 07:29:39 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:46.208 ************************************ 00:16:46.208 END TEST accel_crc32c 00:16:46.208 ************************************ 00:16:46.208 07:29:39 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:46.208 07:29:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:16:46.208 07:29:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.208 07:29:39 accel -- common/autotest_common.sh@10 -- # set +x 00:16:46.208 ************************************ 00:16:46.208 START TEST accel_crc32c_C2 00:16:46.208 ************************************ 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:46.208 07:29:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PuLqrk -t 1 -w crc32c -y -C 2 00:16:46.208 [2024-05-16 07:29:39.543153] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:46.208 [2024-05-16 07:29:39.543389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:46.774 EAL: TSC is not safe to use in SMP mode 00:16:46.774 EAL: TSC is not invariant 00:16:46.774 [2024-05-16 07:29:40.045258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.774 [2024-05-16 07:29:40.129185] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:46.774 [2024-05-16 07:29:40.143271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.774 
07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.774 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:46.775 07:29:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:48.149 00:16:48.149 real 0m1.763s 00:16:48.149 user 0m1.238s 00:16:48.149 sys 0m0.536s 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:48.149 ************************************ 00:16:48.149 END TEST accel_crc32c_C2 00:16:48.149 ************************************ 00:16:48.149 07:29:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 07:29:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:48.149 07:29:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:48.149 07:29:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:48.149 07:29:41 accel -- common/autotest_common.sh@10 -- # set +x 00:16:48.149 ************************************ 00:16:48.149 START TEST accel_copy 00:16:48.149 ************************************ 00:16:48.149 07:29:41 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.149 07:29:41 accel.accel_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.od5saa -t 1 -w copy -y 00:16:48.149 [2024-05-16 07:29:41.345580] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
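Every case in this section is driven the same way: accel.sh calls run_test (from common/autotest_common.sh, per the @1097/@1103/@1121 trace entries above) with a test name and the accel_test command to execute, and run_test emits the START TEST / END TEST banners plus the real/user/sys timing triple seen above. A rough, assumed approximation of that wrapper for local experiments; the real helper also manages xtrace and exit-status bookkeeping, and the banner style is only approximated here:

  # Simplified stand-in for run_test; not the actual helper from autotest_common.sh.
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
  }
  # Usage mirroring the entry above: run_test_sketch accel_copy accel_test -t 1 -w copy -y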
00:16:48.149 [2024-05-16 07:29:41.345758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:48.427 EAL: TSC is not safe to use in SMP mode 00:16:48.427 EAL: TSC is not invariant 00:16:48.427 [2024-05-16 07:29:41.811739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.427 [2024-05-16 07:29:41.905680] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:16:48.427 [2024-05-16 07:29:41.916556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:48.427 07:29:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:48.428 07:29:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:48.428 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:48.428 07:29:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:49.811 07:29:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:49.811 00:16:49.811 real 0m1.735s 00:16:49.811 user 0m1.241s 00:16:49.811 sys 0m0.505s 00:16:49.811 07:29:43 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:49.811 07:29:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:16:49.811 ************************************ 00:16:49.811 END TEST accel_copy 00:16:49.811 ************************************ 00:16:49.811 07:29:43 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:49.811 07:29:43 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:16:49.811 07:29:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:49.811 07:29:43 accel -- common/autotest_common.sh@10 -- # set +x 00:16:49.811 ************************************ 00:16:49.811 START TEST accel_fill 00:16:49.811 ************************************ 00:16:49.811 07:29:43 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:49.811 07:29:43 accel.accel_fill -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.gV4Jlj -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:49.812 [2024-05-16 07:29:43.119587] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:49.812 [2024-05-16 07:29:43.119837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:50.376 EAL: TSC is not safe to use in SMP mode 00:16:50.376 EAL: TSC is not invariant 00:16:50.376 [2024-05-16 07:29:43.646207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.376 [2024-05-16 07:29:43.732096] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
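The `[[ -n software ]]`, `[[ -n copy ]]` and `[[ software == \s\o\f\t\w\a\r\e ]]` entries above are accel.sh's post-run checks that an opcode and module were recorded and that the expected software module handled the operation. The right-hand side appears fully backslash-escaped because bash xtrace prints a quoted `==` operand that way to mark it as a literal string rather than a glob pattern. A small illustration of that distinction (not taken from accel.sh):

  # Inside [[ ]], an unquoted RHS of == is a glob; quoting or escaping it forces a literal match.
  module=software
  [[ $module == soft* ]]            && echo "glob match"
  [[ $module == "software" ]]       && echo "literal match (quoted)"
  [[ $module == \s\o\f\t\w\a\r\e ]] && echo "literal match (escaped, as xtrace renders it)"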
00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:16:50.376 [2024-05-16 07:29:43.743691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:16:50.376 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:50.377 07:29:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:51.378 07:29:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:51.378 00:16:51.378 real 0m1.809s 00:16:51.378 user 0m1.262s 00:16:51.378 sys 0m0.560s 00:16:51.378 07:29:44 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.378 07:29:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:16:51.378 ************************************ 00:16:51.378 END TEST accel_fill 00:16:51.378 ************************************ 00:16:51.636 07:29:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:51.636 07:29:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:51.636 07:29:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:51.636 07:29:44 accel -- common/autotest_common.sh@10 -- # set +x 00:16:51.636 ************************************ 00:16:51.636 START TEST accel_copy_crc32c 00:16:51.636 ************************************ 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:51.636 07:29:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rTubyz -t 1 -w copy_crc32c -y 00:16:51.636 [2024-05-16 07:29:44.969386] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:51.636 [2024-05-16 07:29:44.969553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:51.894 EAL: TSC is not safe to use in SMP mode 00:16:51.894 EAL: TSC is not invariant 00:16:51.894 [2024-05-16 07:29:45.446414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.153 [2024-05-16 07:29:45.541612] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
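Each case ends with a real/user/sys triple (0m1.734s, 0m1.763s, 0m1.735s, and now 0m1.809s for accel_fill), which is the only per-test performance signal in this log. A small post-processing sketch for pulling those figures out of a saved copy of the console output; build.log is a placeholder name, and the sketch assumes one log entry per line as the console prints them:

  # Reading aid only, not part of the CI job: list each test with its wall-clock time.
  awk '
    /real[[:space:]]+[0-9]+m[0-9.]+s/ { last_real = $NF }
    /END TEST/                        { print $NF ": " last_real }
  ' build.log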
00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:52.153 [2024-05-16 07:29:45.553693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.153 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
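Almost all of the volume in these traces comes from accel.sh stepping through the same three lines of the script, `IFS=:`, `read -r var val`, and `case "$var" in`, once for every expected setting (opcode, buffer sizes, module, run time, and so on). A minimal sketch of that idiom with invented key:value input, not the script's actual data:

  # Sketch of the read/case dispatch pattern behind the accel.sh@19-@21 entries above.
  # The key:value lines fed in below are placeholders for illustration only.
  while IFS=: read -r var val; do
    case "$var" in
      accel_opc)    printf 'expected opcode: %s\n' "$val" ;;
      accel_module) printf 'expected module: %s\n' "$val" ;;
      *)            ;;   # anything else is ignored in this sketch
    esac
  done < <(printf '%s\n' 'accel_opc:copy_crc32c' 'accel_module:software')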
00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:52.154 07:29:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.530 07:29:46 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.530 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:53.531 ************************************ 00:16:53.531 END TEST accel_copy_crc32c 00:16:53.531 ************************************ 00:16:53.531 00:16:53.531 real 0m1.748s 00:16:53.531 user 0m1.240s 00:16:53.531 sys 0m0.518s 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:53.531 07:29:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:53.531 07:29:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:53.531 07:29:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:16:53.531 07:29:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.531 07:29:46 accel -- common/autotest_common.sh@10 -- # set +x 00:16:53.531 ************************************ 00:16:53.531 START TEST accel_copy_crc32c_C2 00:16:53.531 ************************************ 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:53.531 07:29:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xJJ16T -t 1 -w copy_crc32c -y -C 2 00:16:53.531 [2024-05-16 07:29:46.761330] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:53.531 [2024-05-16 07:29:46.761535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:53.789 EAL: TSC is not safe to use in SMP mode 00:16:53.789 EAL: TSC is not invariant 00:16:53.789 [2024-05-16 07:29:47.216073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.789 [2024-05-16 07:29:47.305403] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:53.789 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:53.789 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:53.789 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:53.789 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:53.790 [2024-05-16 07:29:47.316094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:53.790 07:29:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:55.166 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:55.167 00:16:55.167 real 0m1.720s 00:16:55.167 user 0m1.230s 00:16:55.167 sys 0m0.506s 00:16:55.167 
07:29:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.167 07:29:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:55.167 ************************************ 00:16:55.167 END TEST accel_copy_crc32c_C2 00:16:55.167 ************************************ 00:16:55.167 07:29:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:55.167 07:29:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:55.167 07:29:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:55.167 07:29:48 accel -- common/autotest_common.sh@10 -- # set +x 00:16:55.167 ************************************ 00:16:55.167 START TEST accel_dualcast 00:16:55.167 ************************************ 00:16:55.167 07:29:48 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:55.167 07:29:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.KY19Z4 -t 1 -w dualcast -y 00:16:55.167 [2024-05-16 07:29:48.519814] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:55.167 [2024-05-16 07:29:48.520174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:55.732 EAL: TSC is not safe to use in SMP mode 00:16:55.732 EAL: TSC is not invariant 00:16:55.733 [2024-05-16 07:29:49.032861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.733 [2024-05-16 07:29:49.137429] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 
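With dualcast now starting, the remaining software-path workloads in this section follow the same shape as the ones already logged. For poking at them outside the harness, a loop over the same accel_perf binary is enough; the workload names below are the ones exercised in this log and the path is again taken from the trace:

  # Assumed convenience loop, not part of accel.sh: run each workload for one second with verification.
  SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
  for w in crc32c copy fill copy_crc32c dualcast compare; do
    echo "=== $w ==="
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w" -y || break
  done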
00:16:55.733 [2024-05-16 07:29:49.149219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:55.733 07:29:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 
00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:57.109 07:29:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:57.109 00:16:57.109 real 0m1.792s 00:16:57.109 user 0m1.248s 00:16:57.109 sys 0m0.553s 00:16:57.109 07:29:50 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:57.109 ************************************ 00:16:57.109 END TEST accel_dualcast 00:16:57.109 ************************************ 00:16:57.109 07:29:50 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:16:57.109 07:29:50 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:57.109 07:29:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:57.109 07:29:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:57.109 07:29:50 accel -- common/autotest_common.sh@10 -- # set +x 00:16:57.109 ************************************ 00:16:57.109 START TEST accel_compare 00:16:57.109 ************************************ 00:16:57.109 07:29:50 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:57.109 07:29:50 accel.accel_compare -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7tqAYq -t 1 -w compare -y 00:16:57.109 [2024-05-16 07:29:50.349485] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:57.109 [2024-05-16 07:29:50.349770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:57.368 EAL: TSC is not safe to use in SMP mode 00:16:57.368 EAL: TSC is not invariant 00:16:57.368 [2024-05-16 07:29:50.796583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.368 [2024-05-16 07:29:50.881350] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 
00:16:57.368 [2024-05-16 07:29:50.893394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:57.368 07:29:50 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:57.368 07:29:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:58.748 07:29:52 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:58.748 07:29:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:58.748 00:16:58.748 real 0m1.711s 00:16:58.748 user 0m1.211s 00:16:58.748 sys 0m0.511s 00:16:58.748 07:29:52 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:58.748 07:29:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:16:58.748 ************************************ 00:16:58.748 END TEST accel_compare 00:16:58.748 ************************************ 00:16:58.748 07:29:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:58.748 07:29:52 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:16:58.748 07:29:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:58.748 07:29:52 accel -- common/autotest_common.sh@10 -- # set +x 00:16:58.748 ************************************ 00:16:58.748 START TEST accel_xor 00:16:58.748 ************************************ 00:16:58.748 07:29:52 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:16:58.748 07:29:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:16:58.748 07:29:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:16:58.749 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:58.749 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:58.749 07:29:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:58.749 07:29:52 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xGQlbm -t 1 -w xor -y 00:16:58.749 [2024-05-16 07:29:52.101426] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:58.749 [2024-05-16 07:29:52.101745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:59.315 EAL: TSC is not safe to use in SMP mode 00:16:59.315 EAL: TSC is not invariant 00:16:59.315 [2024-05-16 07:29:52.579406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.315 [2024-05-16 07:29:52.661691] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
00:16:59.315 [2024-05-16 07:29:52.672326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:16:59.315 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:59.316 07:29:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:00.691 00:17:00.691 real 0m1.732s 00:17:00.691 user 0m1.221s 00:17:00.691 sys 0m0.520s 00:17:00.691 07:29:53 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:00.691 07:29:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:17:00.691 ************************************ 00:17:00.691 END TEST accel_xor 00:17:00.691 ************************************ 00:17:00.691 07:29:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:17:00.691 07:29:53 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:17:00.691 07:29:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.691 07:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:17:00.691 ************************************ 00:17:00.691 START TEST accel_xor 00:17:00.691 ************************************ 00:17:00.691 07:29:53 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:17:00.691 07:29:53 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Ws6XHq -t 1 -w xor -y -x 3 00:17:00.691 [2024-05-16 07:29:53.874989] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:00.691 [2024-05-16 07:29:53.875172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:00.951 EAL: TSC is not safe to use in SMP mode 00:17:00.951 EAL: TSC is not invariant 00:17:00.951 [2024-05-16 07:29:54.358374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.951 [2024-05-16 07:29:54.438411] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
00:17:00.951 [2024-05-16 07:29:54.450965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:00.951 07:29:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:02.327 07:29:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:02.327 00:17:02.327 real 0m1.737s 00:17:02.327 user 0m1.208s 00:17:02.327 sys 0m0.534s 00:17:02.327 07:29:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.327 ************************************ 00:17:02.327 END TEST accel_xor 00:17:02.327 ************************************ 00:17:02.327 07:29:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:17:02.327 07:29:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:17:02.327 07:29:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:17:02.327 07:29:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.327 07:29:55 accel -- common/autotest_common.sh@10 -- # set +x 00:17:02.327 ************************************ 00:17:02.327 START TEST accel_dif_verify 00:17:02.327 ************************************ 00:17:02.327 07:29:55 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:17:02.327 07:29:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.51dpnW -t 1 -w dif_verify 00:17:02.327 [2024-05-16 07:29:55.655194] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:02.327 [2024-05-16 07:29:55.655446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:02.586 EAL: TSC is not safe to use in SMP mode 00:17:02.586 EAL: TSC is not invariant 00:17:02.844 [2024-05-16 07:29:56.157292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.844 [2024-05-16 07:29:56.252468] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 
00:17:02.844 [2024-05-16 07:29:56.265812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:02.844 07:29:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:17:04.218 07:29:57 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:04.218 00:17:04.218 real 0m1.776s 00:17:04.218 user 0m1.218s 00:17:04.218 sys 0m0.567s 00:17:04.218 07:29:57 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:04.218 ************************************ 00:17:04.218 END TEST accel_dif_verify 00:17:04.218 ************************************ 00:17:04.218 07:29:57 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:17:04.218 07:29:57 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:17:04.218 07:29:57 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:17:04.218 07:29:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:04.218 07:29:57 accel -- common/autotest_common.sh@10 -- # set +x 00:17:04.218 ************************************ 00:17:04.218 START TEST accel_dif_generate 00:17:04.218 ************************************ 00:17:04.218 07:29:57 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:17:04.218 07:29:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.14YmF4 -t 1 -w dif_generate 00:17:04.218 [2024-05-16 07:29:57.467465] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:17:04.218 [2024-05-16 07:29:57.467634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:04.475 EAL: TSC is not safe to use in SMP mode 00:17:04.475 EAL: TSC is not invariant 00:17:04.475 [2024-05-16 07:29:57.956393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.734 [2024-05-16 07:29:58.049015] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:17:04.734 [2024-05-16 07:29:58.060673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:04.734 07:29:58 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case 
"$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:04.734 07:29:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:17:05.669 07:29:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:05.669 00:17:05.669 real 0m1.760s 00:17:05.669 user 0m1.249s 00:17:05.669 sys 0m0.523s 00:17:05.669 07:29:59 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.669 ************************************ 00:17:05.669 END TEST accel_dif_generate 00:17:05.669 ************************************ 00:17:05.669 07:29:59 accel.accel_dif_generate -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.927 07:29:59 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:17:05.927 07:29:59 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:17:05.927 07:29:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:05.927 07:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:17:05.927 ************************************ 00:17:05.927 START TEST accel_dif_generate_copy 00:17:05.927 ************************************ 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:17:05.927 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.9hPIXZ -t 1 -w dif_generate_copy 00:17:05.927 [2024-05-16 07:29:59.268528] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:05.927 [2024-05-16 07:29:59.268695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:06.506 EAL: TSC is not safe to use in SMP mode 00:17:06.506 EAL: TSC is not invariant 00:17:06.506 [2024-05-16 07:29:59.768174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.506 [2024-05-16 07:29:59.863080] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
00:17:06.506 [2024-05-16 07:29:59.876277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:17:06.506 07:29:59 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:06.506 07:29:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:07.461 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:17:07.462 07:30:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:07.462 00:17:07.462 real 0m1.762s 00:17:07.462 user 0m1.221s 00:17:07.462 sys 0m0.550s 00:17:07.462 07:30:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:07.462 07:30:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:17:07.462 ************************************ 00:17:07.462 END TEST accel_dif_generate_copy 00:17:07.462 ************************************ 00:17:07.719 07:30:01 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:17:07.719 07:30:01 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:07.719 07:30:01 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:17:07.719 07:30:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:07.719 07:30:01 accel -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 ************************************ 00:17:07.719 START TEST accel_comp 00:17:07.719 ************************************ 00:17:07.719 07:30:01 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:07.719 07:30:01 accel.accel_comp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GzPjVV -t 1 -w compress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:07.719 [2024-05-16 07:30:01.072078] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:07.719 [2024-05-16 07:30:01.072241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:07.977 EAL: TSC is not safe to use in SMP mode 00:17:07.977 EAL: TSC is not invariant 00:17:07.977 [2024-05-16 07:30:01.541972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.235 [2024-05-16 07:30:01.623146] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:17:08.235 [2024-05-16 07:30:01.630793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 
07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.235 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:08.236 07:30:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:17:09.636 07:30:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:09.636 00:17:09.636 real 0m1.721s 00:17:09.636 user 0m1.226s 00:17:09.636 sys 0m0.507s 00:17:09.636 07:30:02 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.636 ************************************ 00:17:09.636 END TEST accel_comp 00:17:09.636 ************************************ 00:17:09.636 07:30:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:17:09.636 07:30:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:09.636 07:30:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:17:09.636 07:30:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.636 07:30:02 accel -- common/autotest_common.sh@10 -- # set +x 00:17:09.636 ************************************ 00:17:09.636 START TEST accel_decomp 00:17:09.636 ************************************ 00:17:09.636 07:30:02 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.636 07:30:02 accel.accel_decomp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wRdxpP -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:09.636 [2024-05-16 07:30:02.828678] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:09.636 [2024-05-16 07:30:02.828868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:09.895 EAL: TSC is not safe to use in SMP mode 00:17:09.895 EAL: TSC is not invariant 00:17:09.895 [2024-05-16 07:30:03.273776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.895 [2024-05-16 07:30:03.356123] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:17:09.895 [2024-05-16 07:30:03.365588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:09.895 07:30:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:11.269 07:30:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:11.269 00:17:11.269 real 0m1.698s 00:17:11.269 user 0m1.222s 00:17:11.269 sys 0m0.489s 00:17:11.269 ************************************ 00:17:11.269 END TEST accel_decomp 00:17:11.269 ************************************ 00:17:11.269 07:30:04 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:11.269 07:30:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:17:11.269 07:30:04 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:11.269 07:30:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:17:11.269 07:30:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:11.269 07:30:04 accel -- common/autotest_common.sh@10 -- # set +x 00:17:11.269 ************************************ 00:17:11.269 START TEST accel_decmop_full 
00:17:11.269 ************************************ 00:17:11.269 07:30:04 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:11.269 07:30:04 accel.accel_decmop_full -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Q93D4g -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:11.269 [2024-05-16 07:30:04.569210] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:11.269 [2024-05-16 07:30:04.569459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:11.527 EAL: TSC is not safe to use in SMP mode 00:17:11.527 EAL: TSC is not invariant 00:17:11.527 [2024-05-16 07:30:05.010605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.527 [2024-05-16 07:30:05.091780] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 
00:17:11.785 [2024-05-16 07:30:05.102239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 
07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:11.785 07:30:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 
accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:12.722 07:30:06 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:12.722 00:17:12.722 real 0m1.711s 00:17:12.722 user 0m1.233s 00:17:12.722 sys 0m0.487s 00:17:12.722 07:30:06 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:12.722 07:30:06 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:17:12.722 ************************************ 00:17:12.722 END TEST accel_decmop_full 00:17:12.722 ************************************ 00:17:12.981 07:30:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:12.981 07:30:06 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:17:12.981 07:30:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:12.981 07:30:06 accel -- common/autotest_common.sh@10 -- # set +x 00:17:12.981 ************************************ 00:17:12.981 START TEST accel_decomp_mcore 00:17:12.981 ************************************ 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:12.981 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.OcBZZ9 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:12.981 [2024-05-16 07:30:06.316396] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:17:12.981 [2024-05-16 07:30:06.316595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:13.239 EAL: TSC is not safe to use in SMP mode 00:17:13.239 EAL: TSC is not invariant 00:17:13.239 [2024-05-16 07:30:06.778556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.500 [2024-05-16 07:30:06.858382] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:13.500 [2024-05-16 07:30:06.858441] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:13.500 [2024-05-16 07:30:06.858448] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:17:13.500 [2024-05-16 07:30:06.858455] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:17:13.500 [2024-05-16 07:30:06.873832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.500 [2024-05-16 07:30:06.873703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.500 [2024-05-16 07:30:06.873757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.500 [2024-05-16 07:30:06.873829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.500 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:13.501 07:30:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:14.460 00:17:14.460 real 0m1.719s 00:17:14.460 user 0m4.312s 00:17:14.460 sys 0m0.522s 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:14.460 07:30:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:17:14.460 ************************************ 00:17:14.460 END TEST accel_decomp_mcore 00:17:14.460 ************************************ 00:17:14.732 07:30:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:14.732 07:30:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:17:14.732 07:30:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:14.732 07:30:08 accel -- common/autotest_common.sh@10 -- # set +x 00:17:14.732 ************************************ 00:17:14.732 START TEST accel_decomp_full_mcore 00:17:14.732 ************************************ 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:14.732 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mamdY6 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:14.732 [2024-05-16 07:30:08.074629] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:14.732 [2024-05-16 07:30:08.074878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:15.005 EAL: TSC is not safe to use in SMP mode 00:17:15.005 EAL: TSC is not invariant 00:17:15.005 [2024-05-16 07:30:08.537551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.271 [2024-05-16 07:30:08.625762] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:15.271 [2024-05-16 07:30:08.625841] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:15.271 [2024-05-16 07:30:08.625855] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:17:15.271 [2024-05-16 07:30:08.625868] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:17:15.271 [2024-05-16 07:30:08.636512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.271 [2024-05-16 07:30:08.636657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.271 [2024-05-16 07:30:08.636581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.271 [2024-05-16 07:30:08.636653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.271 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:15.272 07:30:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.245 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.536 00:17:16.536 real 0m1.736s 00:17:16.536 user 0m4.358s 00:17:16.536 sys 0m0.521s 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:16.536 ************************************ 00:17:16.536 END TEST accel_decomp_full_mcore 00:17:16.536 ************************************ 00:17:16.536 07:30:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:17:16.536 07:30:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:16.536 07:30:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:17:16.536 07:30:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:16.536 07:30:09 accel -- common/autotest_common.sh@10 -- # set +x 00:17:16.536 ************************************ 00:17:16.536 START TEST accel_decomp_mthread 00:17:16.536 ************************************ 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:16.536 07:30:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.N2W2nh -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:16.536 [2024-05-16 07:30:09.847020] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:17:16.536 [2024-05-16 07:30:09.847275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:16.794 EAL: TSC is not safe to use in SMP mode 00:17:16.794 EAL: TSC is not invariant 00:17:16.794 [2024-05-16 07:30:10.296841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.052 [2024-05-16 07:30:10.391239] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:17:17.052 [2024-05-16 07:30:10.401029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 
07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:17.052 07:30:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:18.012 00:17:18.012 real 0m1.717s 00:17:18.012 user 0m1.224s 00:17:18.012 sys 0m0.503s 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:17:18.012 ************************************ 00:17:18.012 END TEST accel_decomp_mthread 00:17:18.012 ************************************ 00:17:18.012 07:30:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:17:18.267 07:30:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:18.267 07:30:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:17:18.267 07:30:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.267 07:30:11 accel -- common/autotest_common.sh@10 -- # set +x 00:17:18.267 ************************************ 00:17:18.267 START TEST accel_decomp_full_mthread 00:17:18.267 ************************************ 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:18.267 07:30:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yMWsTA -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:18.267 [2024-05-16 07:30:11.608239] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:18.267 [2024-05-16 07:30:11.608383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:18.524 EAL: TSC is not safe to use in SMP mode 00:17:18.524 EAL: TSC is not invariant 00:17:18.524 [2024-05-16 07:30:12.059210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.829 [2024-05-16 07:30:12.138458] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:17:18.829 [2024-05-16 07:30:12.146799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:18.829 07:30:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:19.785 00:17:19.785 real 0m1.734s 00:17:19.785 user 0m1.241s 00:17:19.785 sys 0m0.506s 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:19.785 07:30:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:17:19.785 ************************************ 00:17:19.785 END TEST accel_decomp_full_mthread 00:17:19.785 ************************************ 00:17:20.043 07:30:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:17:20.043 07:30:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.5zHETD 00:17:20.043 07:30:13 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:20.043 07:30:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:20.043 07:30:13 accel -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.043 ************************************ 00:17:20.043 START TEST accel_dif_functional_tests 00:17:20.043 ************************************ 00:17:20.043 07:30:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.5zHETD 00:17:20.043 [2024-05-16 07:30:13.384040] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:20.043 [2024-05-16 07:30:13.384384] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:20.302 EAL: TSC is not safe to use in SMP mode 00:17:20.302 EAL: TSC is not invariant 00:17:20.302 [2024-05-16 07:30:13.836947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.560 [2024-05-16 07:30:13.929443] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:20.560 [2024-05-16 07:30:13.929503] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:20.560 [2024-05-16 07:30:13.929515] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:17:20.560 07:30:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:17:20.560 07:30:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:20.560 07:30:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:20.560 07:30:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:20.560 07:30:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:20.560 07:30:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:20.560 07:30:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:17:20.560 07:30:13 accel -- accel/accel.sh@41 -- # jq -r . 00:17:20.560 [2024-05-16 07:30:13.940581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.560 [2024-05-16 07:30:13.940530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.560 [2024-05-16 07:30:13.940574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.560 00:17:20.560 00:17:20.560 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.560 http://cunit.sourceforge.net/ 00:17:20.560 00:17:20.560 00:17:20.560 Suite: accel_dif 00:17:20.560 Test: verify: DIF generated, GUARD check ...passed 00:17:20.560 Test: verify: DIF generated, APPTAG check ...passed 00:17:20.560 Test: verify: DIF generated, REFTAG check ...passed 00:17:20.560 Test: verify: DIF not generated, GUARD check ...passed 00:17:20.560 Test: verify: DIF not generated, APPTAG check ...passed 00:17:20.560 Test: verify: DIF not generated, REFTAG check ...passed 00:17:20.560 Test: verify: APPTAG correct, APPTAG check ...passed 00:17:20.560 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:17:20.560 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:17:20.560 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:17:20.560 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-05-16 07:30:13.956406] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:20.560 [2024-05-16 07:30:13.956460] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:20.560 [2024-05-16 07:30:13.956494] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:20.560 [2024-05-16 07:30:13.956519] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:20.560 
[2024-05-16 07:30:13.956538] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:17:20.560 [2024-05-16 07:30:13.956567] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:17:20.560 [2024-05-16 07:30:13.956604] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:17:20.560 passed 00:17:20.560 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:17:20.560 Test: generate copy: DIF generated, GUARD check ...passed 00:17:20.560 Test: generate copy: DIF generated, APTTAG check ...passed 00:17:20.560 Test: generate copy: DIF generated, REFTAG check ...passed 00:17:20.560 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:17:20.560 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:17:20.560 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:17:20.560 Test: generate copy: iovecs-len validate ...passed 00:17:20.560 Test: generate copy: buffer alignment validate ...passed 00:17:20.560 00:17:20.560 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.560 suites 1 1 n/a 0 0 00:17:20.560 tests 20 20 20 0 0 00:17:20.560 asserts 204 204 204 0 n/a 00:17:20.560 00:17:20.560 Elapsed time = 0.000 seconds 00:17:20.560 [2024-05-16 07:30:13.956703] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:17:20.560 [2024-05-16 07:30:13.956866] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:17:20.874 00:17:20.874 real 0m0.763s 00:17:20.874 user 0m0.413s 00:17:20.874 sys 0m0.490s 00:17:20.874 07:30:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:20.874 07:30:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 ************************************ 00:17:20.874 END TEST accel_dif_functional_tests 00:17:20.874 ************************************ 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:17:20.874 00:17:20.874 real 0m39.818s 00:17:20.874 user 0m33.708s 00:17:20.874 sys 0m13.242s 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:20.874 07:30:14 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:17:20.874 07:30:14 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:20.874 07:30:14 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:17:20.874 07:30:14 accel -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 ************************************ 00:17:20.874 END TEST accel 00:17:20.874 ************************************ 00:17:20.874 07:30:14 -- spdk/autotest.sh@180 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:20.874 07:30:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:20.874 07:30:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:20.874 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 ************************************ 00:17:20.874 START TEST accel_rpc 00:17:20.874 ************************************ 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:20.874 * Looking for test storage... 00:17:20.874 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:17:20.874 07:30:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:20.874 07:30:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=48314 00:17:20.874 07:30:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:20.874 07:30:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 48314 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 48314 ']' 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:20.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:20.874 07:30:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.874 [2024-05-16 07:30:14.367180] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:20.874 [2024-05-16 07:30:14.367362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:21.439 EAL: TSC is not safe to use in SMP mode 00:17:21.439 EAL: TSC is not invariant 00:17:21.439 [2024-05-16 07:30:14.807935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.439 [2024-05-16 07:30:14.886019] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:21.439 [2024-05-16 07:30:14.888179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 ************************************ 00:17:22.006 START TEST accel_assign_opcode 00:17:22.006 ************************************ 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 [2024-05-16 07:30:15.464463] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 [2024-05-16 07:30:15.472458] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.006 software 00:17:22.006 00:17:22.006 real 0m0.072s 00:17:22.006 user 0m0.003s 00:17:22.006 sys 0m0.018s 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.006 07:30:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:17:22.006 ************************************ 00:17:22.006 END TEST accel_assign_opcode 00:17:22.006 ************************************ 00:17:22.006 07:30:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 48314 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 48314 ']' 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 48314 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 48314 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@954 -- # tail -1 00:17:22.006 07:30:15 accel_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:17:22.264 killing process with pid 48314 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48314' 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@965 -- # kill 48314 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@970 -- # wait 48314 00:17:22.264 00:17:22.264 real 0m1.609s 00:17:22.264 user 0m1.669s 00:17:22.264 sys 0m0.648s 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.264 07:30:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.264 ************************************ 00:17:22.264 END TEST accel_rpc 00:17:22.264 ************************************ 00:17:22.522 07:30:15 -- spdk/autotest.sh@181 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:22.522 07:30:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:22.522 07:30:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.522 07:30:15 -- common/autotest_common.sh@10 -- # set +x 00:17:22.522 ************************************ 00:17:22.522 START TEST app_cmdline 00:17:22.522 ************************************ 00:17:22.522 07:30:15 app_cmdline -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:22.522 * Looking for test storage... 00:17:22.522 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:17:22.522 07:30:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:22.522 07:30:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=48392 00:17:22.522 07:30:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 48392 00:17:22.522 07:30:16 app_cmdline -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 48392 ']' 00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.522 07:30:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:22.522 [2024-05-16 07:30:16.026662] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:22.522 [2024-05-16 07:30:16.026928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:23.087 EAL: TSC is not safe to use in SMP mode 00:17:23.087 EAL: TSC is not invariant 00:17:23.087 [2024-05-16 07:30:16.468423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.087 [2024-05-16 07:30:16.562315] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:23.087 [2024-05-16 07:30:16.565008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.694 07:30:17 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.694 07:30:17 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:17:23.694 07:30:17 app_cmdline -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:23.977 { 00:17:23.977 "version": "SPDK v24.05-pre git sha1 cc94f3031", 00:17:23.977 "fields": { 00:17:23.977 "major": 24, 00:17:23.977 "minor": 5, 00:17:23.977 "patch": 0, 00:17:23.977 "suffix": "-pre", 00:17:23.977 "commit": "cc94f3031" 00:17:23.977 } 00:17:23.977 } 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:23.977 07:30:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.977 07:30:17 
app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:23.977 07:30:17 app_cmdline -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:24.235 request: 00:17:24.235 { 00:17:24.235 "method": "env_dpdk_get_mem_stats", 00:17:24.235 "req_id": 1 00:17:24.235 } 00:17:24.235 Got JSON-RPC error response 00:17:24.235 response: 00:17:24.235 { 00:17:24.235 "code": -32601, 00:17:24.235 "message": "Method not found" 00:17:24.235 } 00:17:24.235 07:30:17 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:17:24.235 07:30:17 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.235 07:30:17 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.235 07:30:17 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.235 07:30:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 48392 00:17:24.235 07:30:17 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 48392 ']' 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 48392 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@954 -- # tail -1 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@954 -- # ps -c -o command 48392 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:17:24.236 killing process with pid 48392 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48392' 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@965 -- # kill 48392 00:17:24.236 07:30:17 app_cmdline -- common/autotest_common.sh@970 -- # wait 48392 00:17:24.495 00:17:24.495 real 0m1.958s 00:17:24.495 user 0m2.317s 00:17:24.495 sys 0m0.717s 00:17:24.495 07:30:17 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.495 07:30:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:24.495 ************************************ 00:17:24.495 END TEST app_cmdline 00:17:24.495 ************************************ 00:17:24.495 07:30:17 -- spdk/autotest.sh@182 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:24.495 07:30:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:24.495 07:30:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.495 07:30:17 -- common/autotest_common.sh@10 -- # set +x 00:17:24.495 ************************************ 00:17:24.495 START TEST version 00:17:24.495 ************************************ 00:17:24.495 07:30:17 version -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:24.495 * Looking for test storage... 
00:17:24.495 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:17:24.495 07:30:18 version -- app/version.sh@17 -- # get_header_version major 00:17:24.495 07:30:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # cut -f2 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # tr -d '"' 00:17:24.495 07:30:18 version -- app/version.sh@17 -- # major=24 00:17:24.495 07:30:18 version -- app/version.sh@18 -- # get_header_version minor 00:17:24.495 07:30:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # cut -f2 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # tr -d '"' 00:17:24.495 07:30:18 version -- app/version.sh@18 -- # minor=5 00:17:24.495 07:30:18 version -- app/version.sh@19 -- # get_header_version patch 00:17:24.495 07:30:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # cut -f2 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # tr -d '"' 00:17:24.495 07:30:18 version -- app/version.sh@19 -- # patch=0 00:17:24.495 07:30:18 version -- app/version.sh@20 -- # get_header_version suffix 00:17:24.495 07:30:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # cut -f2 00:17:24.495 07:30:18 version -- app/version.sh@14 -- # tr -d '"' 00:17:24.495 07:30:18 version -- app/version.sh@20 -- # suffix=-pre 00:17:24.495 07:30:18 version -- app/version.sh@22 -- # version=24.5 00:17:24.495 07:30:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:24.495 07:30:18 version -- app/version.sh@28 -- # version=24.5rc0 00:17:24.495 07:30:18 version -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:17:24.495 07:30:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:24.754 07:30:18 version -- app/version.sh@30 -- # py_version=24.5rc0 00:17:24.754 07:30:18 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:17:24.754 00:17:24.754 real 0m0.221s 00:17:24.754 user 0m0.139s 00:17:24.754 sys 0m0.171s 00:17:24.754 07:30:18 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:24.754 07:30:18 version -- common/autotest_common.sh@10 -- # set +x 00:17:24.754 ************************************ 00:17:24.754 END TEST version 00:17:24.754 ************************************ 00:17:24.754 07:30:18 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:17:24.754 07:30:18 -- spdk/autotest.sh@185 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:17:24.754 07:30:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:24.754 07:30:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:24.754 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:17:24.754 ************************************ 00:17:24.754 START TEST blockdev_general 00:17:24.754 ************************************ 
00:17:24.754 07:30:18 blockdev_general -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:17:24.754 * Looking for test storage... 00:17:24.754 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:24.754 07:30:18 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:24.754 07:30:18 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48527 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:25.013 07:30:18 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 48527 00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 48527 ']' 00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.013 07:30:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:25.013 [2024-05-16 07:30:18.339896] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:25.013 [2024-05-16 07:30:18.340114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:25.579 EAL: TSC is not safe to use in SMP mode 00:17:25.579 EAL: TSC is not invariant 00:17:25.579 [2024-05-16 07:30:18.851897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.579 [2024-05-16 07:30:18.945567] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:25.579 [2024-05-16 07:30:18.948213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 [2024-05-16 07:30:19.489394] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:26.156 [2024-05-16 07:30:19.489441] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:26.156 00:17:26.156 [2024-05-16 07:30:19.497386] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:26.156 [2024-05-16 07:30:19.497416] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:26.156 00:17:26.156 Malloc0 00:17:26.156 Malloc1 00:17:26.156 Malloc2 00:17:26.156 Malloc3 00:17:26.156 Malloc4 00:17:26.156 Malloc5 00:17:26.156 Malloc6 00:17:26.156 Malloc7 00:17:26.156 Malloc8 00:17:26.156 Malloc9 00:17:26.156 [2024-05-16 07:30:19.585394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:26.156 [2024-05-16 07:30:19.585442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.156 [2024-05-16 07:30:19.585465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf3b980 00:17:26.156 [2024-05-16 07:30:19.585488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.156 [2024-05-16 07:30:19.585870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.156 [2024-05-16 07:30:19.585907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:26.156 TestPT 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:17:26.156 5000+0 records in 00:17:26.156 5000+0 records out 00:17:26.156 10240000 bytes transferred in 0.034381 secs (297836177 bytes/sec) 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:17:26.156 
07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 AIO0 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:17:26.156 07:30:19 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.156 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.157 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.157 07:30:19 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:17:26.157 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.157 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.455 07:30:19 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:17:26.455 07:30:19 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:17:26.456 07:30:19 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "25dbd62b-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25dbd62b-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' 
"54326866-d4eb-2250-b65f-69b9cfa70e0e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54326866-d4eb-2250-b65f-69b9cfa70e0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "fc303ed6-1f43-f057-89b1-4ad1e1833770"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc303ed6-1f43-f057-89b1-4ad1e1833770",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "18976896-5a45-f856-b8c5-5363a9f285ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18976896-5a45-f856-b8c5-5363a9f285ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c147391c-249b-a558-951f-9ee9ae8cd0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c147391c-249b-a558-951f-9ee9ae8cd0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3a5046e0-f48d-6554-a6ce-dac9e37af24b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3a5046e0-f48d-6554-a6ce-dac9e37af24b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "821ed8ce-91c9-b05d-bbe2-73df4d592fa5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "821ed8ce-91c9-b05d-bbe2-73df4d592fa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a1c134ee-f376-515f-a222-65ad4dcd909f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c134ee-f376-515f-a222-65ad4dcd909f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0c9d8f0d-63f4-b755-bc21-ed267c6691ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c9d8f0d-63f4-b755-bc21-ed267c6691ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e75a6769-df11-1952-86a2-e69afa97262a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e75a6769-df11-1952-86a2-e69afa97262a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d4fff39c-e19c-8651-b366-c51ba2600189"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d4fff39c-e19c-8651-b366-c51ba2600189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5268ce12-54ce-1454-b75c-55afe3358324"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5268ce12-54ce-1454-b75c-55afe3358324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "25e94ee4-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25e0b73f-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "25e1efa7-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "25ea7bd1-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "25e32828-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "25e46093-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "25ebb40e-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "25e5992b-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "25e6d1ed-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "25f57952-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "25f57952-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:17:26.456 
07:30:19 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:17:26.456 07:30:19 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:17:26.456 07:30:19 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:17:26.456 07:30:19 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 48527 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 48527 ']' 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 48527 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@954 -- # ps -c -o command 48527 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@954 -- # tail -1 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48527' 00:17:26.456 killing process with pid 48527 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@965 -- # kill 48527 00:17:26.456 07:30:19 blockdev_general -- common/autotest_common.sh@970 -- # wait 48527 00:17:26.721 07:30:20 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:26.721 07:30:20 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:17:26.721 07:30:20 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:17:26.721 07:30:20 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.721 07:30:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:26.721 ************************************ 00:17:26.721 START TEST bdev_hello_world 00:17:26.721 ************************************ 00:17:26.721 07:30:20 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:17:26.721 [2024-05-16 07:30:20.260450] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:26.721 [2024-05-16 07:30:20.260618] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:27.286 EAL: TSC is not safe to use in SMP mode 00:17:27.286 EAL: TSC is not invariant 00:17:27.286 [2024-05-16 07:30:20.752402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.286 [2024-05-16 07:30:20.836696] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:27.286 [2024-05-16 07:30:20.838941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.545 [2024-05-16 07:30:20.896047] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:27.545 [2024-05-16 07:30:20.896105] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:27.545 [2024-05-16 07:30:20.904016] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:27.545 [2024-05-16 07:30:20.904038] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:27.545 [2024-05-16 07:30:20.912037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:27.545 [2024-05-16 07:30:20.912064] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:27.545 [2024-05-16 07:30:20.912071] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:27.545 [2024-05-16 07:30:20.960037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:27.545 [2024-05-16 07:30:20.960094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.545 [2024-05-16 07:30:20.960107] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d406800 00:17:27.545 [2024-05-16 07:30:20.960115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.545 [2024-05-16 07:30:20.960429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.545 [2024-05-16 07:30:20.960450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:27.545 [2024-05-16 07:30:21.060177] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:27.545 [2024-05-16 07:30:21.060239] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:17:27.545 [2024-05-16 07:30:21.060260] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:27.545 [2024-05-16 07:30:21.060300] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:27.545 [2024-05-16 07:30:21.060324] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:27.545 [2024-05-16 07:30:21.060342] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:27.545 [2024-05-16 07:30:21.060363] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:27.545 00:17:27.545 [2024-05-16 07:30:21.060382] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:27.804 00:17:27.804 real 0m1.030s 00:17:27.804 user 0m0.496s 00:17:27.804 sys 0m0.533s 00:17:27.804 07:30:21 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:27.804 07:30:21 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:27.804 ************************************ 00:17:27.804 END TEST bdev_hello_world 00:17:27.804 ************************************ 00:17:27.804 07:30:21 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:17:27.804 07:30:21 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:27.804 07:30:21 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:27.804 07:30:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:27.804 ************************************ 00:17:27.804 START TEST bdev_bounds 00:17:27.804 ************************************ 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=48579 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:27.804 Process bdevio pid: 48579 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 48579' 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 48579 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 48579 ']' 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.804 07:30:21 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:27.805 [2024-05-16 07:30:21.332217] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:27.805 [2024-05-16 07:30:21.332424] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:28.371 EAL: TSC is not safe to use in SMP mode 00:17:28.371 EAL: TSC is not invariant 00:17:28.371 [2024-05-16 07:30:21.785121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.371 [2024-05-16 07:30:21.885257] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:28.371 [2024-05-16 07:30:21.885334] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:17:28.371 [2024-05-16 07:30:21.885347] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:17:28.371 [2024-05-16 07:30:21.889498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.371 [2024-05-16 07:30:21.889672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.371 [2024-05-16 07:30:21.889659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.628 [2024-05-16 07:30:21.949048] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:28.629 [2024-05-16 07:30:21.949128] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:28.629 [2024-05-16 07:30:21.957028] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:28.629 [2024-05-16 07:30:21.957060] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:28.629 [2024-05-16 07:30:21.965048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:28.629 [2024-05-16 07:30:21.965077] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:28.629 [2024-05-16 07:30:21.965088] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:28.629 [2024-05-16 07:30:22.013061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:28.629 [2024-05-16 07:30:22.013131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.629 [2024-05-16 07:30:22.013149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d6aa800 00:17:28.629 [2024-05-16 07:30:22.013160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.629 [2024-05-16 07:30:22.013610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.629 [2024-05-16 07:30:22.013641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:28.886 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:28.886 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:17:28.886 07:30:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:28.886 I/O targets: 00:17:28.886 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:17:28.886 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:17:28.886 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:17:28.886 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:17:28.886 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:17:28.886 raid0: 131072 blocks of 512 bytes (64 MiB) 00:17:28.886 concat0: 131072 blocks of 512 bytes (64 MiB) 00:17:28.886 raid1: 65536 blocks of 512 bytes (32 MiB) 00:17:28.886 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:17:28.886 00:17:28.886 00:17:28.886 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.886 http://cunit.sourceforge.net/ 00:17:28.886 00:17:28.886 00:17:28.886 Suite: bdevio tests on: 
AIO0 00:17:28.886 Test: blockdev write read block ...passed 00:17:28.886 Test: blockdev write zeroes read block ...passed 00:17:28.886 Test: blockdev write zeroes read no split ...passed 00:17:28.886 Test: blockdev write zeroes read split ...passed 00:17:29.145 Test: blockdev write zeroes read split partial ...passed 00:17:29.145 Test: blockdev reset ...passed 00:17:29.145 Test: blockdev write read 8 blocks ...passed 00:17:29.145 Test: blockdev write read size > 128k ...passed 00:17:29.145 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: raid1 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: concat0 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: raid0 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: TestPT 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 
00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: Malloc2p7 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: Malloc2p6 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.146 Test: blockdev copy ...passed 00:17:29.146 Suite: bdevio tests on: Malloc2p5 00:17:29.146 Test: blockdev write read block ...passed 00:17:29.146 Test: blockdev write zeroes read block ...passed 00:17:29.146 Test: blockdev 
write zeroes read no split ...passed 00:17:29.146 Test: blockdev write zeroes read split ...passed 00:17:29.146 Test: blockdev write zeroes read split partial ...passed 00:17:29.146 Test: blockdev reset ...passed 00:17:29.146 Test: blockdev write read 8 blocks ...passed 00:17:29.146 Test: blockdev write read size > 128k ...passed 00:17:29.146 Test: blockdev write read invalid size ...passed 00:17:29.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.146 Test: blockdev write read max offset ...passed 00:17:29.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.146 Test: blockdev writev readv 8 blocks ...passed 00:17:29.146 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.146 Test: blockdev writev readv block ...passed 00:17:29.146 Test: blockdev writev readv size > 128k ...passed 00:17:29.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.146 Test: blockdev comparev and writev ...passed 00:17:29.146 Test: blockdev nvme passthru rw ...passed 00:17:29.146 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.146 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc2p4 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc2p3 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc2p2 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc2p1 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 
00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc2p0 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc1p1 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc1p0 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write 
zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.147 Test: blockdev writev readv 8 blocks ...passed 00:17:29.147 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.147 Test: blockdev writev readv block ...passed 00:17:29.147 Test: blockdev writev readv size > 128k ...passed 00:17:29.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.147 Test: blockdev comparev and writev ...passed 00:17:29.147 Test: blockdev nvme passthru rw ...passed 00:17:29.147 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.147 Test: blockdev nvme admin passthru ...passed 00:17:29.147 Test: blockdev copy ...passed 00:17:29.147 Suite: bdevio tests on: Malloc0 00:17:29.147 Test: blockdev write read block ...passed 00:17:29.147 Test: blockdev write zeroes read block ...passed 00:17:29.147 Test: blockdev write zeroes read no split ...passed 00:17:29.147 Test: blockdev write zeroes read split ...passed 00:17:29.147 Test: blockdev write zeroes read split partial ...passed 00:17:29.147 Test: blockdev reset ...passed 00:17:29.147 Test: blockdev write read 8 blocks ...passed 00:17:29.147 Test: blockdev write read size > 128k ...passed 00:17:29.147 Test: blockdev write read invalid size ...passed 00:17:29.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.147 Test: blockdev write read max offset ...passed 00:17:29.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.148 Test: blockdev writev readv 8 blocks ...passed 00:17:29.148 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.148 Test: blockdev writev readv block ...passed 00:17:29.148 Test: blockdev writev readv size > 128k ...passed 00:17:29.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.148 Test: blockdev comparev and writev ...passed 00:17:29.148 Test: blockdev nvme passthru rw ...passed 00:17:29.148 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.148 Test: blockdev nvme admin passthru ...passed 00:17:29.148 Test: blockdev copy ...passed 00:17:29.148 00:17:29.148 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.148 suites 16 16 n/a 0 0 00:17:29.148 tests 368 368 368 0 0 00:17:29.148 asserts 2224 2224 2224 0 n/a 00:17:29.148 00:17:29.148 Elapsed time = 0.484 seconds 00:17:29.148 0 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 48579 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 48579 ']' 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 48579 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o 
command 48579 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:17:29.148 killing process with pid 48579 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48579' 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 48579 00:17:29.148 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 48579 00:17:29.406 07:30:22 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:17:29.406 00:17:29.406 real 0m1.561s 00:17:29.406 user 0m3.124s 00:17:29.406 sys 0m0.608s 00:17:29.406 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.406 07:30:22 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:29.406 ************************************ 00:17:29.406 END TEST bdev_bounds 00:17:29.406 ************************************ 00:17:29.406 07:30:22 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:29.406 ************************************ 00:17:29.406 START TEST bdev_nbd 00:17:29.406 ************************************ 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:17:29.406 00:17:29.406 real 0m0.005s 00:17:29.406 user 0m0.010s 00:17:29.406 sys 0m0.001s 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.406 ************************************ 00:17:29.406 END TEST bdev_nbd 00:17:29.406 ************************************ 00:17:29.406 07:30:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:29.406 07:30:22 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:17:29.406 07:30:22 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:17:29.406 07:30:22 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:17:29.406 07:30:22 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.406 07:30:22 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:17:29.665 ************************************ 00:17:29.665 START TEST bdev_fio 00:17:29.665 ************************************ 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:17:29.665 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:17:29.665 07:30:22 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # 
echo '[job_concat0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.234 07:30:23 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:30.234 ************************************ 00:17:30.234 START TEST bdev_fio_rw_verify 00:17:30.234 ************************************ 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.234 07:30:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:30.234 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_TestPT: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:30.234 fio-3.35 00:17:30.492 Starting 16 threads 00:17:30.750 EAL: TSC is not safe to use in SMP mode 00:17:30.750 EAL: TSC is not invariant 00:17:42.981 00:17:42.981 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102705: Thu May 16 07:30:34 2024 00:17:42.981 read: IOPS=232k, BW=904MiB/s (948MB/s)(9045MiB/10001msec) 00:17:42.981 slat (nsec): min=256, max=1338.6M, avg=4004.01, stdev=961481.03 00:17:42.981 clat (nsec): min=665, max=282703k, avg=45106.21, stdev=1387875.03 00:17:42.981 lat (nsec): min=1661, max=1338.7M, avg=49110.22, stdev=1688399.85 00:17:42.981 clat percentiles (usec): 00:17:42.981 | 50.000th=[ 10], 99.000th=[ 734], 99.900th=[ 848], 00:17:42.981 | 99.990th=[ 94897], 99.999th=[123208] 00:17:42.981 write: IOPS=385k, BW=1502MiB/s (1575MB/s)(14.5GiB/9886msec); 0 zone resets 00:17:42.981 slat (nsec): min=592, max=1177.4M, avg=22347.37, stdev=1083937.50 00:17:42.981 clat (nsec): min=634, max=2040.0M, avg=107068.14, stdev=3468312.47 00:17:42.981 lat (usec): min=11, max=2040.0k, avg=129.42, stdev=3711.04 00:17:42.981 clat percentiles (usec): 00:17:42.981 | 50.000th=[ 50], 99.000th=[ 709], 99.900th=[ 2311], 00:17:42.981 | 99.990th=[ 94897], 99.999th=[173016] 00:17:42.981 bw ( MiB/s): min= 683, max= 2539, per=99.27%, avg=1491.50, stdev=40.12, samples=297 00:17:42.981 iops : min=175073, max=650216, avg=381819.57, stdev=10271.89, samples=297 00:17:42.981 lat (nsec) : 750=0.01%, 1000=0.01% 00:17:42.981 lat (usec) : 2=0.06%, 4=10.93%, 10=18.03%, 20=20.90%, 50=18.91% 00:17:42.981 lat (usec) : 100=26.09%, 250=3.39%, 500=0.20%, 750=0.75%, 1000=0.61% 00:17:42.981 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.02% 00:17:42.981 lat (msec) : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:17:42.981 lat (msec) : >=2000=0.01% 00:17:42.981 cpu : usr=56.33%, sys=2.87%, ctx=664544, majf=0, minf=621 00:17:42.981 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:42.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.981 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.981 issued rwts: total=2315441,3802478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:42.981 00:17:42.981 Run status group 0 (all jobs): 00:17:42.981 READ: bw=904MiB/s (948MB/s), 904MiB/s-904MiB/s (948MB/s-948MB/s), io=9045MiB (9484MB), run=10001-10001msec 00:17:42.981 WRITE: bw=1502MiB/s (1575MB/s), 1502MiB/s-1502MiB/s (1575MB/s-1575MB/s), io=14.5GiB (15.6GB), run=9886-9886msec 00:17:42.981 00:17:42.981 real 0m12.489s 00:17:42.981 user 1m34.446s 00:17:42.981 sys 0m7.176s 00:17:42.981 07:30:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:42.981 07:30:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:42.981 
************************************ 00:17:42.981 END TEST bdev_fio_rw_verify 00:17:42.981 ************************************ 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:17:42.981 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:17:42.982 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:42.983 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "25dbd62b-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25dbd62b-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "54326866-d4eb-2250-b65f-69b9cfa70e0e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54326866-d4eb-2250-b65f-69b9cfa70e0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
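The traces above tear down the verify run and regenerate bdev.fio for a trim pass: fio_config_gen is called with workload=trim, the '[' trim == trim ']' branch switches the job to rw=trimwrite, and one [job_<bdev>] section per unmap-capable bdev is appended afterwards, just as [job_Malloc0]/filename=Malloc0 were echoed for the verify pass earlier. A minimal bash sketch of that assembly, assuming the echoed lines are redirected into bdev.fio (the redirections and the template header copied in by fio_config_gen are not visible in this xtrace):

    cfg=bdev.fio
    echo "rw=trimwrite" >> "$cfg"             # trim workload, per the 'echo rw=trimwrite' in the trace above
    for b in Malloc0 Malloc1p0 Malloc1p1; do  # first few of the unmap-capable bdevs selected below
      echo "[job_${b}]"   >> "$cfg"
      echo "filename=${b}" >> "$cfg"
    done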
' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "fc303ed6-1f43-f057-89b1-4ad1e1833770"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc303ed6-1f43-f057-89b1-4ad1e1833770",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "18976896-5a45-f856-b8c5-5363a9f285ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18976896-5a45-f856-b8c5-5363a9f285ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c147391c-249b-a558-951f-9ee9ae8cd0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c147391c-249b-a558-951f-9ee9ae8cd0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3a5046e0-f48d-6554-a6ce-dac9e37af24b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3a5046e0-f48d-6554-a6ce-dac9e37af24b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "821ed8ce-91c9-b05d-bbe2-73df4d592fa5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"821ed8ce-91c9-b05d-bbe2-73df4d592fa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a1c134ee-f376-515f-a222-65ad4dcd909f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c134ee-f376-515f-a222-65ad4dcd909f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0c9d8f0d-63f4-b755-bc21-ed267c6691ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c9d8f0d-63f4-b755-bc21-ed267c6691ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e75a6769-df11-1952-86a2-e69afa97262a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e75a6769-df11-1952-86a2-e69afa97262a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d4fff39c-e19c-8651-b366-c51ba2600189"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d4fff39c-e19c-8651-b366-c51ba2600189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5268ce12-54ce-1454-b75c-55afe3358324"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5268ce12-54ce-1454-b75c-55afe3358324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "25e94ee4-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25e0b73f-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "25e1efa7-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "25ea7bd1-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "25e32828-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "25e46093-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "25ebb40e-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "25e5992b-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "25e6d1ed-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "25f57952-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "25f57952-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:17:42.983 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:17:42.983 Malloc1p0 00:17:42.983 Malloc1p1 00:17:42.983 Malloc2p0 00:17:42.983 Malloc2p1 00:17:42.983 Malloc2p2 00:17:42.983 Malloc2p3 00:17:42.983 Malloc2p4 00:17:42.983 Malloc2p5 00:17:42.983 Malloc2p6 00:17:42.983 Malloc2p7 00:17:42.983 TestPT 00:17:42.983 raid0 00:17:42.983 concat0 ]] 
00:17:42.983 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "25dbd62b-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25dbd62b-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "54326866-d4eb-2250-b65f-69b9cfa70e0e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "54326866-d4eb-2250-b65f-69b9cfa70e0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "fc303ed6-1f43-f057-89b1-4ad1e1833770"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc303ed6-1f43-f057-89b1-4ad1e1833770",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "18976896-5a45-f856-b8c5-5363a9f285ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "18976896-5a45-f856-b8c5-5363a9f285ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "c147391c-249b-a558-951f-9ee9ae8cd0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' 
"uuid": "c147391c-249b-a558-951f-9ee9ae8cd0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3a5046e0-f48d-6554-a6ce-dac9e37af24b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3a5046e0-f48d-6554-a6ce-dac9e37af24b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "821ed8ce-91c9-b05d-bbe2-73df4d592fa5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "821ed8ce-91c9-b05d-bbe2-73df4d592fa5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a1c134ee-f376-515f-a222-65ad4dcd909f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c134ee-f376-515f-a222-65ad4dcd909f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "0c9d8f0d-63f4-b755-bc21-ed267c6691ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0c9d8f0d-63f4-b755-bc21-ed267c6691ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e75a6769-df11-1952-86a2-e69afa97262a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e75a6769-df11-1952-86a2-e69afa97262a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d4fff39c-e19c-8651-b366-c51ba2600189"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d4fff39c-e19c-8651-b366-c51ba2600189",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5268ce12-54ce-1454-b75c-55afe3358324"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5268ce12-54ce-1454-b75c-55afe3358324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "25e94ee4-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "25e94ee4-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "25e0b73f-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "25e1efa7-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "25ea7bd1-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ea7bd1-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "25e32828-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "25e46093-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "25ebb40e-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "25ebb40e-1356-11ef-8e8f-9dd684e56d79",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": 
"25e5992b-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "25e6d1ed-1356-11ef-8e8f-9dd684e56d79",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "25f57952-1356-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "25f57952-1356-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:42.984 07:30:36 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- 
# set +x 00:17:42.984 ************************************ 00:17:42.984 START TEST bdev_fio_trim 00:17:42.984 ************************************ 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:42.984 07:30:36 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:17:42.984 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.984 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:42.985 fio-3.35 00:17:42.985 Starting 14 threads 00:17:43.553 EAL: TSC is not safe to use in SMP mode 00:17:43.553 EAL: TSC is not invariant 00:17:55.860 00:17:55.860 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102724: Thu May 16 07:30:48 2024 00:17:55.860 write: IOPS=2010k, BW=7853MiB/s (8235MB/s)(76.7GiB/10002msec); 0 zone resets 00:17:55.860 slat (nsec): min=248, max=2416.0M, avg=1771.54, stdev=806964.08 00:17:55.860 clat (nsec): min=1349, max=2416.0M, avg=19523.33, stdev=1517820.27 00:17:55.860 lat (nsec): min=1941, max=2416.0M, avg=21294.87, stdev=1719000.13 00:17:55.860 clat percentiles (usec): 00:17:55.860 | 50.000th=[ 8], 99.000th=[ 25], 99.900th=[ 955], 99.990th=[10290], 00:17:55.860 | 99.999th=[94897] 00:17:55.860 bw ( MiB/s): min= 2533, max=13506, per=100.00%, avg=8224.70, stdev=249.22, samples=252 00:17:55.860 iops : min=648685, max=3457748, avg=2105518.67, stdev=63800.02, samples=252 00:17:55.860 trim: IOPS=2010k, BW=7853MiB/s (8235MB/s)(76.7GiB/10002msec); 0 zone resets 00:17:55.860 slat (nsec): min=487, max=1787.4M, avg=1913.56, stdev=534093.02 00:17:55.860 clat (nsec): min=359, max=2416.0M, avg=14212.82, stdev=1649648.07 00:17:55.860 lat (nsec): min=1560, max=2416.0M, avg=16126.39, stdev=1733956.00 00:17:55.860 clat percentiles (usec): 00:17:55.860 | 50.000th=[ 9], 99.000th=[ 25], 99.900th=[ 33], 99.990th=[ 64], 00:17:55.860 | 99.999th=[94897] 00:17:55.860 bw ( MiB/s): min= 2533, 
max=13506, per=100.00%, avg=8224.71, stdev=249.22, samples=252 00:17:55.860 iops : min=648685, max=3457749, avg=2105520.35, stdev=63800.04, samples=252 00:17:55.860 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:17:55.860 lat (usec) : 2=0.10%, 4=15.20%, 10=53.56%, 20=28.04%, 50=2.82% 00:17:55.860 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.24% 00:17:55.860 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:17:55.860 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:17:55.860 lat (msec) : >=2000=0.01% 00:17:55.860 cpu : usr=63.41%, sys=4.56%, ctx=963087, majf=0, minf=0 00:17:55.860 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:55.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.860 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.860 issued rwts: total=0,20108671,20108677,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.860 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:55.860 00:17:55.860 Run status group 0 (all jobs): 00:17:55.860 WRITE: bw=7853MiB/s (8235MB/s), 7853MiB/s-7853MiB/s (8235MB/s-8235MB/s), io=76.7GiB (82.4GB), run=10002-10002msec 00:17:55.860 TRIM: bw=7853MiB/s (8235MB/s), 7853MiB/s-7853MiB/s (8235MB/s-8235MB/s), io=76.7GiB (82.4GB), run=10002-10002msec 00:17:55.860 00:17:55.860 real 0m12.853s 00:17:55.860 user 1m34.991s 00:17:55.860 sys 0m9.806s 00:17:55.860 ************************************ 00:17:55.860 END TEST bdev_fio_trim 00:17:55.860 ************************************ 00:17:55.860 07:30:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.860 07:30:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:55.860 /usr/home/vagrant/spdk_repo/spdk 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:17:55.860 00:17:55.860 real 0m26.217s 00:17:55.860 user 3m9.701s 00:17:55.860 sys 0m17.555s 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.860 07:30:49 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:55.860 ************************************ 00:17:55.860 END TEST bdev_fio 00:17:55.860 ************************************ 00:17:55.860 07:30:49 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:55.860 07:30:49 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:55.860 07:30:49 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:17:55.860 07:30:49 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.860 07:30:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:17:55.860 ************************************ 00:17:55.860 START TEST bdev_verify 00:17:55.860 ************************************ 00:17:55.860 07:30:49 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # 
/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:55.860 [2024-05-16 07:30:49.246042] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:17:55.860 [2024-05-16 07:30:49.246400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:56.426 EAL: TSC is not safe to use in SMP mode 00:17:56.426 EAL: TSC is not invariant 00:17:56.426 [2024-05-16 07:30:49.717268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:56.426 [2024-05-16 07:30:49.799587] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:56.426 [2024-05-16 07:30:49.799659] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:17:56.426 [2024-05-16 07:30:49.802444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.426 [2024-05-16 07:30:49.802449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.426 [2024-05-16 07:30:49.859341] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:56.426 [2024-05-16 07:30:49.859404] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:17:56.426 [2024-05-16 07:30:49.867298] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:56.426 [2024-05-16 07:30:49.867342] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:17:56.426 [2024-05-16 07:30:49.875328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:56.426 [2024-05-16 07:30:49.875372] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:17:56.426 [2024-05-16 07:30:49.875382] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:17:56.426 [2024-05-16 07:30:49.923318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:56.426 [2024-05-16 07:30:49.923385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.426 [2024-05-16 07:30:49.923401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d4a8800 00:17:56.426 [2024-05-16 07:30:49.923409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.426 [2024-05-16 07:30:49.923755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.426 [2024-05-16 07:30:49.923799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:17:56.683 Running I/O for 5 seconds... 
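Both I/O passes in this stretch of the log are driven from the same bdev inventory: the fio trim pass above filters the JSON dump with jq to keep only unmap-capable bdevs and appends one [job_<name>] section per match to bdev.fio, while the verify pass hands the whole bdev.json configuration to bdevperf. A minimal sketch of the two invocations, with illustrative paths and with the bdev dump held in the bdevs array exactly as the xtrace shows:

printf '%s\n' "${bdevs[@]}" \
  | jq -r 'select(.supported_io_types.unmap == true) | .name' \
  | while read -r b; do printf '[job_%s]\nfilename=%s\n' "$b" "$b"; done >> bdev.fio

# 128 outstanding 4 KiB verify I/Os per job for 5 seconds on cores 0-1 (-m 0x3).
./build/examples/bdevperf --json ./test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3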
00:18:02.031 00:18:02.031 Latency(us) 00:18:02.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.032 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x1000 00:18:02.032 Malloc0 : 5.02 6683.02 26.11 0.00 0.00 19145.95 60.95 48184.42 00:18:02.032 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x1000 length 0x1000 00:18:02.032 Malloc0 : 5.03 184.69 0.72 0.00 0.00 692239.99 139.46 1949346.98 00:18:02.032 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x800 00:18:02.032 Malloc1p0 : 5.01 5974.61 23.34 0.00 0.00 21411.76 255.51 21595.61 00:18:02.032 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x800 length 0x800 00:18:02.032 Malloc1p0 : 5.03 6215.02 24.28 0.00 0.00 20582.71 241.86 22594.25 00:18:02.032 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x800 00:18:02.032 Malloc1p1 : 5.01 5974.22 23.34 0.00 0.00 21409.38 234.06 20971.46 00:18:02.032 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x800 length 0x800 00:18:02.032 Malloc1p1 : 5.03 6214.49 24.28 0.00 0.00 20581.00 228.21 21970.10 00:18:02.032 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p0 : 5.01 5973.83 23.34 0.00 0.00 21407.31 230.16 20222.48 00:18:02.032 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p0 : 5.03 6214.05 24.27 0.00 0.00 20579.52 234.06 21345.95 00:18:02.032 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p1 : 5.01 5973.47 23.33 0.00 0.00 21405.45 229.18 19473.50 00:18:02.032 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p1 : 5.03 6213.59 24.27 0.00 0.00 20577.73 224.30 20846.63 00:18:02.032 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p2 : 5.01 5973.02 23.33 0.00 0.00 21403.58 228.21 18849.35 00:18:02.032 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p2 : 5.03 6212.94 24.27 0.00 0.00 20576.09 235.03 20347.31 00:18:02.032 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p3 : 5.01 5972.65 23.33 0.00 0.00 21401.43 237.96 18225.20 00:18:02.032 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p3 : 5.03 6212.58 24.27 0.00 0.00 20574.75 234.06 19723.16 00:18:02.032 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p4 : 5.02 5972.28 23.33 0.00 0.00 21399.37 233.08 17101.72 
00:18:02.032 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p4 : 5.03 6212.15 24.27 0.00 0.00 20572.83 228.21 19223.84 00:18:02.032 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p5 : 5.02 5971.90 23.33 0.00 0.00 21397.36 233.08 17226.56 00:18:02.032 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p5 : 5.03 6211.74 24.26 0.00 0.00 20570.60 228.21 19723.16 00:18:02.032 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p6 : 5.02 5971.55 23.33 0.00 0.00 21395.37 223.33 17850.71 00:18:02.032 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p6 : 5.03 6211.38 24.26 0.00 0.00 20569.00 233.08 20721.80 00:18:02.032 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x200 00:18:02.032 Malloc2p7 : 5.02 5971.17 23.32 0.00 0.00 21393.24 227.23 18724.52 00:18:02.032 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x200 length 0x200 00:18:02.032 Malloc2p7 : 5.03 6210.84 24.26 0.00 0.00 20567.76 229.18 21595.61 00:18:02.032 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x1000 00:18:02.032 TestPT : 5.02 5943.66 23.22 0.00 0.00 21486.97 951.83 20846.63 00:18:02.032 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x1000 length 0x1000 00:18:02.032 TestPT : 5.03 4873.91 19.04 0.00 0.00 26191.45 936.23 82387.87 00:18:02.032 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x2000 00:18:02.032 raid0 : 5.02 5970.27 23.32 0.00 0.00 21387.84 265.26 18599.69 00:18:02.032 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x2000 length 0x2000 00:18:02.032 raid0 : 5.03 6210.21 24.26 0.00 0.00 20561.38 263.31 21845.27 00:18:02.032 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x2000 00:18:02.032 concat0 : 5.02 5969.85 23.32 0.00 0.00 21385.53 244.78 19473.50 00:18:02.032 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x2000 length 0x2000 00:18:02.032 concat0 : 5.03 6209.80 24.26 0.00 0.00 20558.81 243.81 22719.08 00:18:02.032 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x1000 00:18:02.032 raid1 : 5.02 5969.50 23.32 0.00 0.00 21382.89 290.62 20097.65 00:18:02.032 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x1000 length 0x1000 00:18:02.032 raid1 : 5.03 6209.40 24.26 0.00 0.00 20556.71 292.57 23592.89 00:18:02.032 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x0 length 0x4e2 00:18:02.032 
AIO0 : 5.08 923.85 3.61 0.00 0.00 137737.16 12919.92 187744.48 00:18:02.032 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.032 Verification LBA range: start 0x4e2 length 0x4e2 00:18:02.032 AIO0 : 5.08 933.62 3.65 0.00 0.00 136410.65 13668.90 197730.89 00:18:02.032 =================================================================================================================== 00:18:02.032 Total : 177939.25 695.08 0.00 0.00 22991.89 60.95 1949346.98 00:18:02.032 00:18:02.032 real 0m6.121s 00:18:02.032 user 0m9.827s 00:18:02.032 sys 0m0.574s 00:18:02.032 07:30:55 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:02.032 ************************************ 00:18:02.032 END TEST bdev_verify 00:18:02.032 ************************************ 00:18:02.032 07:30:55 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:02.032 07:30:55 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:02.032 07:30:55 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:18:02.032 07:30:55 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:02.032 07:30:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:02.032 ************************************ 00:18:02.032 START TEST bdev_verify_big_io 00:18:02.032 ************************************ 00:18:02.032 07:30:55 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:02.032 [2024-05-16 07:30:55.410716] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:02.032 [2024-05-16 07:30:55.410890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:02.599 EAL: TSC is not safe to use in SMP mode 00:18:02.599 EAL: TSC is not invariant 00:18:02.599 [2024-05-16 07:30:55.905992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:02.599 [2024-05-16 07:30:55.988474] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:02.599 [2024-05-16 07:30:55.988538] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
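The START TEST / END TEST banners and the real/user/sys triplets wrapped around each test come from the run_test helper in autotest_common.sh. A simplified, hypothetical reconstruction of its shape (the real helper also manages xtrace and exit-code bookkeeping):

run_test() {
  # Hypothetical sketch: time the command and print the banners seen throughout this log.
  local test_name=$1; shift
  echo '************************************'
  echo "START TEST $test_name"
  echo '************************************'
  time "$@"
  local rc=$?
  echo '************************************'
  echo "END TEST $test_name"
  echo '************************************'
  return $rc
}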
00:18:02.599 [2024-05-16 07:30:55.991333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.599 [2024-05-16 07:30:55.991328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.599 [2024-05-16 07:30:56.048315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:18:02.599 [2024-05-16 07:30:56.048367] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:18:02.599 [2024-05-16 07:30:56.056305] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:18:02.599 [2024-05-16 07:30:56.056331] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:18:02.599 [2024-05-16 07:30:56.064319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:02.599 [2024-05-16 07:30:56.064345] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:18:02.599 [2024-05-16 07:30:56.064353] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:18:02.599 [2024-05-16 07:30:56.112338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:02.599 [2024-05-16 07:30:56.112389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.599 [2024-05-16 07:30:56.112406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba89800 00:18:02.599 [2024-05-16 07:30:56.112414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.599 [2024-05-16 07:30:56.112754] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.599 [2024-05-16 07:30:56.112781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:18:02.858 [2024-05-16 07:30:56.213111] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213226] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213289] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213354] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213434] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213507] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213606] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213674] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213736] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213799] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213895] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.213982] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.214071] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.214164] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.214282] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.214376] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:18:02.858 [2024-05-16 07:30:56.215339] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:18:02.858 [2024-05-16 07:30:56.215468] bdevperf.c:1834:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:18:02.858 Running I/O for 5 seconds... 
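The queue-depth warnings above line up with the bdev sizes from the earlier dump: a verify workload needs every outstanding request to cover a distinct region, so the usable depth per job is roughly the bdev capacity divided by the 64 KiB I/O size, split across the two reactor cores selected by -m 0x3. A back-of-envelope check (the division by 2 for the two cores is an inference, not something the log states):

echo $(( 8192 * 512 / 65536 / 2 ))    # 32 -> the limit reported for each Malloc2p* split bdev
echo $(( 5000 * 2048 / 65536 / 2 ))   # 78 -> the limit reported for AIO0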
00:18:08.135 00:18:08.135 Latency(us) 00:18:08.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.135 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x100 00:18:08.135 Malloc0 : 5.05 4028.97 251.81 0.00 0.00 31685.49 69.24 127326.71 00:18:08.135 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x100 length 0x100 00:18:08.135 Malloc0 : 5.03 4604.80 287.80 0.00 0.00 27709.48 66.80 129823.31 00:18:08.135 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x80 00:18:08.135 Malloc1p0 : 5.07 1919.10 119.94 0.00 0.00 66346.91 893.32 122832.83 00:18:08.135 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x80 length 0x80 00:18:08.135 Malloc1p0 : 5.10 592.70 37.04 0.00 0.00 214019.95 388.14 285611.29 00:18:08.135 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x80 00:18:08.135 Malloc1p1 : 5.08 528.92 33.06 0.00 0.00 240496.44 333.53 279619.44 00:18:08.135 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x80 length 0x80 00:18:08.135 Malloc1p1 : 5.11 595.48 37.22 0.00 0.00 212838.51 388.14 277622.16 00:18:08.135 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x20 00:18:08.135 Malloc2p0 : 5.06 511.96 32.00 0.00 0.00 62082.27 233.08 86382.44 00:18:08.135 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x20 length 0x20 00:18:08.135 Malloc2p0 : 5.06 581.49 36.34 0.00 0.00 54547.97 255.51 109850.50 00:18:08.135 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x20 00:18:08.135 Malloc2p1 : 5.06 511.94 32.00 0.00 0.00 62059.65 225.28 85383.79 00:18:08.135 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x20 length 0x20 00:18:08.135 Malloc2p1 : 5.06 581.46 36.34 0.00 0.00 54512.22 257.46 108352.53 00:18:08.135 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x20 00:18:08.135 Malloc2p2 : 5.06 511.91 31.99 0.00 0.00 62030.55 232.11 84385.15 00:18:08.135 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x20 length 0x20 00:18:08.135 Malloc2p2 : 5.07 583.64 36.48 0.00 0.00 54314.90 255.51 106854.57 00:18:08.135 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x20 00:18:08.135 Malloc2p3 : 5.06 511.88 31.99 0.00 0.00 62014.37 229.18 83885.83 00:18:08.135 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x20 length 0x20 00:18:08.135 Malloc2p3 : 5.07 583.61 36.48 0.00 0.00 54283.67 255.51 104857.29 00:18:08.135 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x0 length 0x20 00:18:08.135 Malloc2p4 : 5.06 511.85 31.99 0.00 0.00 61987.97 235.03 82887.19 00:18:08.135 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.135 Verification LBA range: start 0x20 length 0x20 00:18:08.136 Malloc2p4 : 5.07 583.58 36.47 0.00 0.00 54256.22 249.66 103359.33 00:18:08.136 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x20 00:18:08.136 Malloc2p5 : 5.06 511.83 31.99 0.00 0.00 61965.16 241.86 81888.55 00:18:08.136 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x20 length 0x20 00:18:08.136 Malloc2p5 : 5.07 583.55 36.47 0.00 0.00 54228.55 255.51 101861.37 00:18:08.136 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x20 00:18:08.136 Malloc2p6 : 5.06 511.80 31.99 0.00 0.00 61944.50 235.03 80889.91 00:18:08.136 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x20 length 0x20 00:18:08.136 Malloc2p6 : 5.07 583.52 36.47 0.00 0.00 54199.71 257.46 100363.41 00:18:08.136 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x20 00:18:08.136 Malloc2p7 : 5.06 511.77 31.99 0.00 0.00 61919.04 233.08 79891.27 00:18:08.136 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x20 length 0x20 00:18:08.136 Malloc2p7 : 5.07 583.49 36.47 0.00 0.00 54178.05 257.46 98865.45 00:18:08.136 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x100 00:18:08.136 TestPT : 5.11 519.31 32.46 0.00 0.00 242737.29 4400.26 249660.22 00:18:08.136 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x100 length 0x100 00:18:08.136 TestPT : 5.20 184.65 11.54 0.00 0.00 680109.57 19972.82 695054.05 00:18:08.136 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x200 00:18:08.136 raid0 : 5.08 531.89 33.24 0.00 0.00 237766.02 360.84 259646.63 00:18:08.136 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x200 length 0x200 00:18:08.136 raid0 : 5.09 603.86 37.74 0.00 0.00 208816.82 399.85 245665.65 00:18:08.136 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x200 00:18:08.136 concat0 : 5.08 531.87 33.24 0.00 0.00 237377.43 356.94 252656.14 00:18:08.136 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x200 length 0x200 00:18:08.136 concat0 : 5.11 604.83 37.80 0.00 0.00 207873.68 411.55 235679.25 00:18:08.136 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x100 00:18:08.136 raid1 : 5.08 534.95 33.43 0.00 0.00 235658.75 415.45 243668.37 00:18:08.136 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x100 length 0x100 00:18:08.136 raid1 : 5.10 623.04 38.94 0.00 0.00 201615.61 485.67 224694.20 00:18:08.136 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x0 length 0x4e 00:18:08.136 AIO0 : 5.08 534.68 33.42 
0.00 0.00 143557.16 468.11 145801.57 00:18:08.136 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:18:08.136 Verification LBA range: start 0x4e length 0x4e 00:18:08.136 AIO0 : 5.11 615.74 38.48 0.00 0.00 124095.11 799.69 136813.80 00:18:08.136 =================================================================================================================== 00:18:08.136 Total : 26314.07 1644.63 0.00 0.00 92791.42 66.80 695054.05 00:18:08.136 00:18:08.136 real 0m6.286s 00:18:08.136 user 0m11.190s 00:18:08.136 sys 0m0.664s 00:18:08.136 ************************************ 00:18:08.136 END TEST bdev_verify_big_io 00:18:08.136 ************************************ 00:18:08.136 07:31:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.136 07:31:01 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:08.393 07:31:01 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:08.393 07:31:01 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:18:08.393 07:31:01 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.393 07:31:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:08.393 ************************************ 00:18:08.393 START TEST bdev_write_zeroes 00:18:08.393 ************************************ 00:18:08.393 07:31:01 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:08.393 [2024-05-16 07:31:01.736611] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:08.393 [2024-05-16 07:31:01.736777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:08.959 EAL: TSC is not safe to use in SMP mode 00:18:08.959 EAL: TSC is not invariant 00:18:08.959 [2024-05-16 07:31:02.225810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.959 [2024-05-16 07:31:02.308936] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:08.959 [2024-05-16 07:31:02.311191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.959 [2024-05-16 07:31:02.367716] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:18:08.959 [2024-05-16 07:31:02.367767] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:18:08.959 [2024-05-16 07:31:02.375704] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:18:08.959 [2024-05-16 07:31:02.375732] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:18:08.959 [2024-05-16 07:31:02.383720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:08.959 [2024-05-16 07:31:02.383746] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:18:08.959 [2024-05-16 07:31:02.383753] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:18:08.959 [2024-05-16 07:31:02.431725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:18:08.959 [2024-05-16 07:31:02.431774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.959 [2024-05-16 07:31:02.431788] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b53b800 00:18:08.959 [2024-05-16 07:31:02.431796] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.959 [2024-05-16 07:31:02.432122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.959 [2024-05-16 07:31:02.432144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:18:09.217 Running I/O for 1 seconds... 
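The recurring "Match on Malloc3 ... created pt_bdev for: TestPT" notices correspond to the passthru vbdev declared in the JSON config, which claims Malloc3 and re-exposes it as TestPT. Expressed as an RPC call it would look roughly like the line below (flag names follow current SPDK rpc.py and may differ between versions):

scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT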
00:18:10.151 00:18:10.151 Latency(us) 00:18:10.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.151 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc0 : 1.01 35371.10 138.17 0.00 0.00 3618.45 152.14 6584.79 00:18:10.151 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc1p0 : 1.01 35367.41 138.15 0.00 0.00 3617.25 172.62 6428.75 00:18:10.151 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc1p1 : 1.01 35362.67 138.14 0.00 0.00 3616.64 172.62 6272.71 00:18:10.151 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p0 : 1.01 35359.68 138.12 0.00 0.00 3615.53 167.74 6147.88 00:18:10.151 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p1 : 1.01 35355.01 138.11 0.00 0.00 3614.68 169.69 6023.05 00:18:10.151 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p2 : 1.01 35352.13 138.09 0.00 0.00 3613.67 165.79 5898.22 00:18:10.151 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p3 : 1.01 35349.07 138.08 0.00 0.00 3612.86 161.89 5804.60 00:18:10.151 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p4 : 1.01 35345.07 138.07 0.00 0.00 3611.75 165.79 5679.77 00:18:10.151 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p5 : 1.01 35342.01 138.05 0.00 0.00 3611.08 162.86 5648.56 00:18:10.151 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p6 : 1.01 35338.89 138.04 0.00 0.00 3610.07 164.81 5492.52 00:18:10.151 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 Malloc2p7 : 1.01 35334.19 138.02 0.00 0.00 3609.23 166.77 5398.90 00:18:10.151 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 TestPT : 1.01 35331.38 138.01 0.00 0.00 3608.47 171.64 5367.69 00:18:10.151 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 raid0 : 1.01 35327.28 138.00 0.00 0.00 3607.27 222.35 5242.86 00:18:10.151 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 concat0 : 1.01 35322.23 137.98 0.00 0.00 3605.75 238.93 5211.66 00:18:10.151 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 raid1 : 1.01 35317.49 137.96 0.00 0.00 3604.26 421.30 4993.20 00:18:10.151 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:10.151 AIO0 : 1.04 3500.07 13.67 0.00 0.00 35740.58 351.08 162778.46 00:18:10.151 =================================================================================================================== 00:18:10.151 Total : 533675.66 2084.67 0.00 0.00 3830.25 152.14 162778.46 00:18:10.408 00:18:10.408 real 0m2.085s 00:18:10.408 user 0m1.409s 00:18:10.408 sys 0m0.543s 00:18:10.408 ************************************ 00:18:10.408 END TEST bdev_write_zeroes 00:18:10.408 ************************************ 00:18:10.408 07:31:03 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:10.408 07:31:03 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:10.408 07:31:03 blockdev_general 
-- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:10.408 07:31:03 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:18:10.408 07:31:03 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:10.408 07:31:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:10.408 ************************************ 00:18:10.408 START TEST bdev_json_nonenclosed 00:18:10.408 ************************************ 00:18:10.408 07:31:03 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:10.408 [2024-05-16 07:31:03.864107] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:10.408 [2024-05-16 07:31:03.864248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:10.974 EAL: TSC is not safe to use in SMP mode 00:18:10.974 EAL: TSC is not invariant 00:18:10.974 [2024-05-16 07:31:04.317483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.974 [2024-05-16 07:31:04.399048] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:10.974 [2024-05-16 07:31:04.401230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.974 [2024-05-16 07:31:04.401271] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:10.974 [2024-05-16 07:31:04.401281] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:10.974 [2024-05-16 07:31:04.401289] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:10.974 00:18:10.974 real 0m0.661s 00:18:10.974 user 0m0.153s 00:18:10.974 sys 0m0.507s 00:18:10.974 07:31:04 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:10.974 07:31:04 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:10.974 ************************************ 00:18:10.974 END TEST bdev_json_nonenclosed 00:18:10.974 ************************************ 00:18:11.231 07:31:04 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:11.231 07:31:04 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:18:11.231 07:31:04 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.231 07:31:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:11.231 ************************************ 00:18:11.231 START TEST bdev_json_nonarray 00:18:11.231 ************************************ 00:18:11.231 07:31:04 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:11.231 [2024-05-16 07:31:04.572523] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
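The nonenclosed case above, and the nonarray case just below, feed bdevperf deliberately malformed configs so the parser errors ("not enclosed in {}", "'subsystems' should be an array") can be checked. For contrast, a valid config has the general shape sketched here; this is a minimal illustration, not the file used by this run:

cat > valid_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF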
00:18:11.231 [2024-05-16 07:31:04.572699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:11.489 EAL: TSC is not safe to use in SMP mode 00:18:11.489 EAL: TSC is not invariant 00:18:11.489 [2024-05-16 07:31:05.045167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.748 [2024-05-16 07:31:05.138145] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:11.748 [2024-05-16 07:31:05.140787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.748 [2024-05-16 07:31:05.140844] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:18:11.748 [2024-05-16 07:31:05.140857] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:11.748 [2024-05-16 07:31:05.140867] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:11.748 00:18:11.748 real 0m0.707s 00:18:11.748 user 0m0.186s 00:18:11.748 sys 0m0.518s 00:18:11.748 07:31:05 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:11.748 07:31:05 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:11.748 ************************************ 00:18:11.748 END TEST bdev_json_nonarray 00:18:11.748 ************************************ 00:18:11.748 07:31:05 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:18:11.748 07:31:05 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:18:11.748 07:31:05 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:11.748 07:31:05 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.748 07:31:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:12.006 ************************************ 00:18:12.006 START TEST bdev_qos 00:18:12.006 ************************************ 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48992 00:18:12.006 Process qos testing pid: 48992 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48992' 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48992 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 48992 ']' 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:12.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
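The two negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) hand bdevperf a --json config that breaks one structural rule each and expect json_config_prepare_ctx() to reject it: the first file is not enclosed in {}, the second has a 'subsystems' key that is not an array. A minimal sketch of the shape a well-formed config takes, assuming the repo path from the log; the file name and contents here are illustrative, not the repo's actual nonenclosed.json/nonarray.json fixtures:

# Illustrative config only -- the real fixtures live under test/bdev/ in the repo.
cat > /tmp/valid.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF
# Same flags as the runs above; with a well-formed file the JSON parsing stage
# succeeds instead of bailing out through spdk_app_stop with a non-zero status.
/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/valid.json -q 128 -o 4096 -w write_zeroes -t 1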
00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:12.006 07:31:05 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:12.006 [2024-05-16 07:31:05.321934] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:12.006 [2024-05-16 07:31:05.322088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:12.264 EAL: TSC is not safe to use in SMP mode 00:18:12.264 EAL: TSC is not invariant 00:18:12.264 [2024-05-16 07:31:05.787306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.522 [2024-05-16 07:31:05.870183] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:12.522 [2024-05-16 07:31:05.872380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.087 Malloc_0 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:13.087 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 [ 00:18:13.088 { 00:18:13.088 "name": "Malloc_0", 00:18:13.088 "aliases": [ 00:18:13.088 "41ca8d55-1356-11ef-8e8f-9dd684e56d79" 00:18:13.088 ], 00:18:13.088 "product_name": "Malloc disk", 00:18:13.088 "block_size": 512, 00:18:13.088 "num_blocks": 262144, 00:18:13.088 "uuid": "41ca8d55-1356-11ef-8e8f-9dd684e56d79", 00:18:13.088 "assigned_rate_limits": { 00:18:13.088 "rw_ios_per_sec": 0, 00:18:13.088 "rw_mbytes_per_sec": 0, 00:18:13.088 "r_mbytes_per_sec": 0, 00:18:13.088 "w_mbytes_per_sec": 0 00:18:13.088 }, 00:18:13.088 "claimed": false, 00:18:13.088 "zoned": false, 00:18:13.088 "supported_io_types": { 00:18:13.088 "read": true, 
00:18:13.088 "write": true, 00:18:13.088 "unmap": true, 00:18:13.088 "write_zeroes": true, 00:18:13.088 "flush": true, 00:18:13.088 "reset": true, 00:18:13.088 "compare": false, 00:18:13.088 "compare_and_write": false, 00:18:13.088 "abort": true, 00:18:13.088 "nvme_admin": false, 00:18:13.088 "nvme_io": false 00:18:13.088 }, 00:18:13.088 "memory_domains": [ 00:18:13.088 { 00:18:13.088 "dma_device_id": "system", 00:18:13.088 "dma_device_type": 1 00:18:13.088 }, 00:18:13.088 { 00:18:13.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.088 "dma_device_type": 2 00:18:13.088 } 00:18:13.088 ], 00:18:13.088 "driver_specific": {} 00:18:13.088 } 00:18:13.088 ] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 Null_1 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:13.088 [ 00:18:13.088 { 00:18:13.088 "name": "Null_1", 00:18:13.088 "aliases": [ 00:18:13.088 "41d00b15-1356-11ef-8e8f-9dd684e56d79" 00:18:13.088 ], 00:18:13.088 "product_name": "Null disk", 00:18:13.088 "block_size": 512, 00:18:13.088 "num_blocks": 262144, 00:18:13.088 "uuid": "41d00b15-1356-11ef-8e8f-9dd684e56d79", 00:18:13.088 "assigned_rate_limits": { 00:18:13.088 "rw_ios_per_sec": 0, 00:18:13.088 "rw_mbytes_per_sec": 0, 00:18:13.088 "r_mbytes_per_sec": 0, 00:18:13.088 "w_mbytes_per_sec": 0 00:18:13.088 }, 00:18:13.088 "claimed": false, 00:18:13.088 "zoned": false, 00:18:13.088 "supported_io_types": { 00:18:13.088 "read": true, 00:18:13.088 "write": true, 00:18:13.088 "unmap": false, 00:18:13.088 "write_zeroes": true, 00:18:13.088 "flush": false, 00:18:13.088 "reset": true, 00:18:13.088 "compare": false, 00:18:13.088 "compare_and_write": false, 00:18:13.088 "abort": true, 00:18:13.088 "nvme_admin": false, 
00:18:13.088 "nvme_io": false 00:18:13.088 }, 00:18:13.088 "driver_specific": {} 00:18:13.088 } 00:18:13.088 ] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:13.088 07:31:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:18:13.088 Running I/O for 60 seconds... 
00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 674269.69 2697078.75 0.00 0.00 2906112.00 0.00 0.00 ' 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=674269.69 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 674269 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=674269 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=168000 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 168000 -gt 1000 ']' 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 168000 Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 168000 IOPS Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.644 07:31:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:19.644 ************************************ 00:18:19.644 START TEST bdev_qos_iops 00:18:19.644 ************************************ 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 168000 IOPS Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=168000 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:18:19.644 07:31:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 168192.28 672769.14 0.00 0.00 720384.00 0.00 0.00 ' 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=168192.28 00:18:24.930 07:31:17 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 168192 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=168192 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=151200 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=184800 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 168192 -lt 151200 ']' 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 168192 -gt 184800 ']' 00:18:24.930 00:18:24.930 real 0m5.520s 00:18:24.930 user 0m0.139s 00:18:24.930 sys 0m0.024s 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.930 ************************************ 00:18:24.930 07:31:17 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:18:24.930 END TEST bdev_qos_iops 00:18:24.930 ************************************ 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:18:24.930 07:31:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 452237.64 1808950.55 0.00 0.00 1941504.00 0.00 0.00 ' 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1941504.00 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1941504 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1941504 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=189 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 189 -lt 2 ']' 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 189 Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 189 BANDWIDTH Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:30.194 07:31:23 
blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:30.194 07:31:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:30.194 ************************************ 00:18:30.194 START TEST bdev_qos_bw 00:18:30.194 ************************************ 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 189 BANDWIDTH Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=189 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:18:30.194 07:31:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 48346.18 193384.72 0.00 0.00 203016.00 0.00 0.00 ' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=203016.00 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 203016 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=203016 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=193536 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=174182 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=212889 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 203016 -lt 174182 ']' 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 203016 -gt 212889 ']' 00:18:35.456 00:18:35.456 real 0m5.442s 00:18:35.456 user 0m0.113s 00:18:35.456 sys 0m0.032s 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:18:35.456 ************************************ 00:18:35.456 END TEST bdev_qos_bw 00:18:35.456 ************************************ 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:18:35.456 
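The bandwidth flavour that just finished above (bdev_qos_bw) repeats the same pattern against Null_1, only reading the sixth field of the iostat.py line, which the harness treats as a KB throughput figure: a roughly 1.9 GB/s unthrottled sample led it to apply a 189 MB/s cap and then re-check. A hedged sketch of those two steps, with the same socket and repo-path assumptions as before:

SPDK=/usr/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 189 Null_1
# Sixth field = the KB figure run_qos_test compares against 189*1024 = 193536 KB,
# accepting 174182..212889 (+/-10%); the run above measured 203016.
$SPDK/scripts/iostat.py -d -i 1 -t 5 | grep Null_1 | tail -1 | awk '{print $6}'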
07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:35.456 07:31:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:35.456 ************************************ 00:18:35.456 START TEST bdev_qos_ro_bw 00:18:35.456 ************************************ 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:18:35.456 07:31:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.91 2047.64 0.00 0.00 2212.00 0.00 0.00 ' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2212.00 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2212 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2212 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2212 -lt 1843 ']' 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2212 -gt 2252 ']' 
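The '[' 2212 -lt 1843 ']' / '[' 2212 -gt 2252 ']' checks just above are run_qos_test's acceptance window for the read-only cap: 2 MB/s (--r_mbytes_per_sec 2) converts to 2048 KB, and the bounds are the limit times 9/10 and 11/10, the same arithmetic the IOPS and bandwidth sub-tests used (151200..184800 and 174182..212889). A small bash sketch of that check with this run's numbers:

# +/-10% acceptance window, mirroring run_qos_test with the values from this run.
qos_limit_kb=$((2 * 1024))        # --r_mbytes_per_sec 2 -> 2048 KB
qos_result=2212                   # sixth field of the iostat.py sample above
lower=$((qos_limit_kb * 9 / 10))  # 1843
upper=$((qos_limit_kb * 11 / 10)) # 2252
if [ "$qos_result" -lt "$lower" ] || [ "$qos_result" -gt "$upper" ]; then
  echo "QoS result $qos_result outside $lower..$upper" >&2
  exit 1
fi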
00:18:40.718 00:18:40.718 real 0m5.509s 00:18:40.718 user 0m0.095s 00:18:40.718 sys 0m0.039s 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.718 07:31:34 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:18:40.718 ************************************ 00:18:40.718 END TEST bdev_qos_ro_bw 00:18:40.718 ************************************ 00:18:40.718 07:31:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:18:40.718 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.718 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 00:18:41.283 Latency(us) 00:18:41.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.283 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:41.283 Malloc_0 : 28.02 227685.09 889.39 0.00 0.00 1114.13 300.37 501317.72 00:18:41.283 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:41.283 Null_1 : 28.05 327647.33 1279.87 0.00 0.00 781.02 62.42 24217.04 00:18:41.283 =================================================================================================================== 00:18:41.283 Total : 555332.41 2169.27 0.00 0.00 917.51 62.42 501317.72 00:18:41.283 0 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48992 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 48992 ']' 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 48992 00:18:41.283 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps -c -o command 48992 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # tail -1 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:18:41.284 killing process with pid 48992 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48992' 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 48992 00:18:41.284 Received shutdown signal, test time was about 28.066496 seconds 00:18:41.284 00:18:41.284 Latency(us) 00:18:41.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.284 =================================================================================================================== 00:18:41.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.284 07:31:34 blockdev_general.bdev_qos 
-- common/autotest_common.sh@970 -- # wait 48992 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:18:41.284 00:18:41.284 real 0m29.478s 00:18:41.284 user 0m30.233s 00:18:41.284 sys 0m0.751s 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:41.284 ************************************ 00:18:41.284 END TEST bdev_qos 00:18:41.284 ************************************ 00:18:41.284 07:31:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:18:41.284 07:31:34 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:18:41.284 07:31:34 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:41.284 07:31:34 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:41.284 07:31:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:41.284 ************************************ 00:18:41.284 START TEST bdev_qd_sampling 00:18:41.284 ************************************ 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=49213 00:18:41.284 Process bdev QD sampling period testing pid: 49213 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 49213' 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 49213 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 49213 ']' 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:41.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:41.284 07:31:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:41.284 [2024-05-16 07:31:34.843644] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:41.284 [2024-05-16 07:31:34.843850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:41.849 EAL: TSC is not safe to use in SMP mode 00:18:41.849 EAL: TSC is not invariant 00:18:41.849 [2024-05-16 07:31:35.300758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.849 [2024-05-16 07:31:35.399152] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
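The qd_sampling suite spinning up here creates a Malloc_QD bdev, turns on queue-depth sampling with a period of 10, drives randread I/O through bdevperf.py perform_tests, and then reads the sampled depth back out of bdev_get_iostat. A hedged sketch of that RPC sequence issued by hand, with the same socket and repo-path assumptions as the QoS sketches above:

SPDK=/usr/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512
$SPDK/scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
# ... run I/O against Malloc_QD (the test drives it via bdevperf.py perform_tests) ...
# The iostat dump now carries the sampling fields the test asserts on.
$SPDK/scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period, .bdevs[0].queue_depth'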
00:18:41.849 [2024-05-16 07:31:35.399226] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:41.849 [2024-05-16 07:31:35.402589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.849 [2024-05-16 07:31:35.402583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 Malloc_QD 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 [ 00:18:42.418 { 00:18:42.418 "name": "Malloc_QD", 00:18:42.418 "aliases": [ 00:18:42.418 "53698d1a-1356-11ef-8e8f-9dd684e56d79" 00:18:42.418 ], 00:18:42.418 "product_name": "Malloc disk", 00:18:42.418 "block_size": 512, 00:18:42.418 "num_blocks": 262144, 00:18:42.418 "uuid": "53698d1a-1356-11ef-8e8f-9dd684e56d79", 00:18:42.418 "assigned_rate_limits": { 00:18:42.418 "rw_ios_per_sec": 0, 00:18:42.418 "rw_mbytes_per_sec": 0, 00:18:42.418 "r_mbytes_per_sec": 0, 00:18:42.418 "w_mbytes_per_sec": 0 00:18:42.418 }, 00:18:42.418 "claimed": false, 00:18:42.418 "zoned": false, 00:18:42.418 "supported_io_types": { 00:18:42.418 "read": true, 00:18:42.418 "write": true, 00:18:42.418 "unmap": true, 00:18:42.418 "write_zeroes": true, 00:18:42.418 "flush": true, 00:18:42.418 "reset": true, 00:18:42.418 "compare": false, 00:18:42.418 "compare_and_write": false, 00:18:42.418 "abort": true, 00:18:42.418 "nvme_admin": false, 00:18:42.418 "nvme_io": false 00:18:42.418 }, 00:18:42.418 "memory_domains": [ 00:18:42.418 { 00:18:42.418 "dma_device_id": "system", 00:18:42.418 
"dma_device_type": 1 00:18:42.418 }, 00:18:42.418 { 00:18:42.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.418 "dma_device_type": 2 00:18:42.418 } 00:18:42.418 ], 00:18:42.418 "driver_specific": {} 00:18:42.418 } 00:18:42.418 ] 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:18:42.418 07:31:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:42.678 Running I/O for 5 seconds... 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:18:44.581 "tick_rate": 2100006180, 00:18:44.581 "ticks": 650614518650, 00:18:44.581 "bdevs": [ 00:18:44.581 { 00:18:44.581 "name": "Malloc_QD", 00:18:44.581 "bytes_read": 13731140096, 00:18:44.581 "num_read_ops": 3352323, 00:18:44.581 "bytes_written": 0, 00:18:44.581 "num_write_ops": 0, 00:18:44.581 "bytes_unmapped": 0, 00:18:44.581 "num_unmap_ops": 0, 00:18:44.581 "bytes_copied": 0, 00:18:44.581 "num_copy_ops": 0, 00:18:44.581 "read_latency_ticks": 2182915001112, 00:18:44.581 "max_read_latency_ticks": 1325122, 00:18:44.581 "min_read_latency_ticks": 35020, 00:18:44.581 "write_latency_ticks": 0, 00:18:44.581 "max_write_latency_ticks": 0, 00:18:44.581 "min_write_latency_ticks": 0, 00:18:44.581 "unmap_latency_ticks": 0, 00:18:44.581 "max_unmap_latency_ticks": 0, 00:18:44.581 "min_unmap_latency_ticks": 0, 00:18:44.581 "copy_latency_ticks": 0, 00:18:44.581 "max_copy_latency_ticks": 0, 00:18:44.581 "min_copy_latency_ticks": 0, 00:18:44.581 "io_error": {}, 00:18:44.581 "queue_depth_polling_period": 10, 00:18:44.581 "queue_depth": 512, 00:18:44.581 "io_time": 400, 00:18:44.581 "weighted_io_time": 220160 00:18:44.581 } 00:18:44.581 ] 00:18:44.581 }' 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:18:44.581 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # 
qd_sampling_period=10 00:18:44.582 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:18:44.582 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:18:44.582 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:18:44.582 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.582 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:44.582 00:18:44.582 Latency(us) 00:18:44.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.582 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:18:44.582 Malloc_QD : 2.06 849445.49 3318.15 0.00 0.00 301.09 50.71 573.44 00:18:44.582 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:44.582 Malloc_QD : 2.06 800498.45 3126.95 0.00 0.00 319.50 63.39 631.95 00:18:44.582 =================================================================================================================== 00:18:44.582 Total : 1649943.93 6445.09 0.00 0.00 310.02 50.71 631.95 00:18:44.841 0 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 49213 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 49213 ']' 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 49213 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps -c -o command 49213 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # tail -1 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:18:44.841 killing process with pid 49213 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49213' 00:18:44.841 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 49213 00:18:44.842 Received shutdown signal, test time was about 2.089817 seconds 00:18:44.842 00:18:44.842 Latency(us) 00:18:44.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.842 =================================================================================================================== 00:18:44.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.842 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 49213 00:18:44.842 07:31:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:18:44.842 00:18:44.842 real 0m3.496s 00:18:44.842 user 0m6.444s 00:18:44.842 sys 0m0.578s 00:18:44.842 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:44.842 ************************************ 00:18:44.842 END TEST bdev_qd_sampling 00:18:44.842 
************************************ 00:18:44.842 07:31:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 07:31:38 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:18:44.842 07:31:38 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:44.842 07:31:38 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:44.842 07:31:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 ************************************ 00:18:44.842 START TEST bdev_error 00:18:44.842 ************************************ 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=49260 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 49260' 00:18:44.842 Process error testing pid: 49260 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 49260 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 49260 ']' 00:18:44.842 07:31:38 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.842 07:31:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.842 [2024-05-16 07:31:38.389848] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:44.842 [2024-05-16 07:31:38.390053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:45.410 EAL: TSC is not safe to use in SMP mode 00:18:45.410 EAL: TSC is not invariant 00:18:45.410 [2024-05-16 07:31:38.908121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.671 [2024-05-16 07:31:38.987326] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
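The bdev_error suite starting here stacks an error-injection bdev on top of a plain malloc bdev: bdev_error_create Dev_1 gives the harness the EE_Dev_1 device it later tells to fail I/O, while Dev_2 stays healthy for comparison. A hedged sketch of that stack built by hand, with the same socket and repo-path assumptions as before:

SPDK=/usr/home/vagrant/spdk_repo/spdk
$SPDK/scripts/rpc.py bdev_malloc_create -b Dev_1 128 512   # backing bdev
$SPDK/scripts/rpc.py bdev_error_create Dev_1               # the harness addresses the result as EE_Dev_1
$SPDK/scripts/rpc.py bdev_malloc_create -b Dev_2 128 512   # untouched comparison bdev
# Fail the next 5 I/Os submitted to the error bdev; they show up as the non-zero
# Fail/s column for EE_Dev_1 in the bdevperf results further down.
$SPDK/scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5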
00:18:45.671 [2024-05-16 07:31:38.989354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:18:45.957 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 Dev_1 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 [ 00:18:45.957 { 00:18:45.957 "name": "Dev_1", 00:18:45.957 "aliases": [ 00:18:45.957 "558191e1-1356-11ef-8e8f-9dd684e56d79" 00:18:45.957 ], 00:18:45.957 "product_name": "Malloc disk", 00:18:45.957 "block_size": 512, 00:18:45.957 "num_blocks": 262144, 00:18:45.957 "uuid": "558191e1-1356-11ef-8e8f-9dd684e56d79", 00:18:45.957 "assigned_rate_limits": { 00:18:45.957 "rw_ios_per_sec": 0, 00:18:45.957 "rw_mbytes_per_sec": 0, 00:18:45.957 "r_mbytes_per_sec": 0, 00:18:45.957 "w_mbytes_per_sec": 0 00:18:45.957 }, 00:18:45.957 "claimed": false, 00:18:45.957 "zoned": false, 00:18:45.957 "supported_io_types": { 00:18:45.957 "read": true, 00:18:45.957 "write": true, 00:18:45.957 "unmap": true, 00:18:45.957 "write_zeroes": true, 00:18:45.957 "flush": true, 00:18:45.957 "reset": true, 00:18:45.957 "compare": false, 00:18:45.957 "compare_and_write": false, 00:18:45.957 "abort": true, 00:18:45.957 "nvme_admin": false, 00:18:45.957 "nvme_io": false 00:18:45.957 }, 00:18:45.957 "memory_domains": [ 00:18:45.957 { 00:18:45.957 "dma_device_id": "system", 00:18:45.957 "dma_device_type": 1 00:18:45.957 }, 00:18:45.957 { 00:18:45.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.957 "dma_device_type": 2 00:18:45.957 } 00:18:45.957 ], 00:18:45.957 "driver_specific": {} 00:18:45.957 } 00:18:45.957 ] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:18:45.957 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 true 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 Dev_2 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:18:45.957 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.958 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.958 [ 00:18:45.958 { 00:18:45.958 "name": "Dev_2", 00:18:45.958 "aliases": [ 00:18:45.958 "5588e44e-1356-11ef-8e8f-9dd684e56d79" 00:18:45.958 ], 00:18:45.958 "product_name": "Malloc disk", 00:18:45.958 "block_size": 512, 00:18:45.958 "num_blocks": 262144, 00:18:45.958 "uuid": "5588e44e-1356-11ef-8e8f-9dd684e56d79", 00:18:45.958 "assigned_rate_limits": { 00:18:45.958 "rw_ios_per_sec": 0, 00:18:45.958 "rw_mbytes_per_sec": 0, 00:18:45.958 "r_mbytes_per_sec": 0, 00:18:45.958 "w_mbytes_per_sec": 0 00:18:45.958 }, 00:18:45.958 "claimed": false, 00:18:45.958 "zoned": false, 00:18:45.958 "supported_io_types": { 00:18:45.958 "read": true, 00:18:45.958 "write": true, 00:18:45.958 "unmap": true, 00:18:45.958 "write_zeroes": true, 00:18:45.958 "flush": true, 00:18:45.958 "reset": true, 00:18:45.958 "compare": false, 00:18:45.958 "compare_and_write": false, 00:18:45.958 "abort": true, 00:18:45.958 "nvme_admin": false, 00:18:45.958 "nvme_io": false 00:18:45.958 }, 00:18:45.958 "memory_domains": [ 00:18:45.958 { 00:18:45.958 "dma_device_id": "system", 00:18:45.958 "dma_device_type": 1 00:18:45.958 }, 00:18:45.958 { 00:18:45.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.958 "dma_device_type": 2 00:18:45.958 } 00:18:45.958 ], 
00:18:45.958 "driver_specific": {} 00:18:45.958 } 00:18:45.958 ] 00:18:45.958 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.958 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:18:45.958 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:18:45.958 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.217 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.217 07:31:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.217 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:18:46.217 07:31:39 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:18:46.217 Running I/O for 5 seconds... 00:18:47.153 07:31:40 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 49260 00:18:47.153 Process is existed as continue on error is set. Pid: 49260 00:18:47.153 07:31:40 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 49260' 00:18:47.153 07:31:40 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.153 07:31:40 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:47.153 07:31:40 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.153 07:31:40 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:18:47.411 Timeout while waiting for response: 00:18:47.411 00:18:47.411 00:18:51.655 00:18:51.655 Latency(us) 00:18:51.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.655 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:51.655 EE_Dev_1 : 0.95 366240.98 1430.63 5.25 0.00 43.43 17.92 112.64 00:18:51.655 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:51.655 Dev_2 : 5.00 803076.10 3137.02 0.00 0.00 19.69 6.89 18849.35 00:18:51.655 =================================================================================================================== 00:18:51.655 Total : 1169317.08 4567.64 5.25 0.00 21.58 6.89 18849.35 00:18:52.589 07:31:45 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 49260 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 49260 ']' 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 49260 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps -c -o command 49260 
00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # tail -1 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:18:52.589 killing process with pid 49260 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49260' 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 49260 00:18:52.589 Received shutdown signal, test time was about 5.000000 seconds 00:18:52.589 00:18:52.589 Latency(us) 00:18:52.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.589 =================================================================================================================== 00:18:52.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.589 07:31:45 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 49260 00:18:52.589 07:31:46 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=49300 00:18:52.589 Process error testing pid: 49300 00:18:52.589 07:31:46 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 49300' 00:18:52.589 07:31:46 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 49300 00:18:52.589 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 49300 ']' 00:18:52.589 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.590 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:52.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.590 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.590 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:52.590 07:31:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:52.590 07:31:46 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:18:52.848 [2024-05-16 07:31:46.160489] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:52.848 [2024-05-16 07:31:46.160803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:53.105 EAL: TSC is not safe to use in SMP mode 00:18:53.105 EAL: TSC is not invariant 00:18:53.105 [2024-05-16 07:31:46.646848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.363 [2024-05-16 07:31:46.734153] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:18:53.363 [2024-05-16 07:31:46.736411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:18:53.929 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 Dev_1 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 [ 00:18:53.929 { 00:18:53.929 "name": "Dev_1", 00:18:53.929 "aliases": [ 00:18:53.929 "5a299db0-1356-11ef-8e8f-9dd684e56d79" 00:18:53.929 ], 00:18:53.929 "product_name": "Malloc disk", 00:18:53.929 "block_size": 512, 00:18:53.929 "num_blocks": 262144, 00:18:53.929 "uuid": "5a299db0-1356-11ef-8e8f-9dd684e56d79", 00:18:53.929 "assigned_rate_limits": { 00:18:53.929 "rw_ios_per_sec": 0, 00:18:53.929 "rw_mbytes_per_sec": 0, 00:18:53.929 "r_mbytes_per_sec": 0, 00:18:53.929 "w_mbytes_per_sec": 0 00:18:53.929 }, 00:18:53.929 "claimed": false, 00:18:53.929 "zoned": false, 00:18:53.929 "supported_io_types": { 00:18:53.929 "read": true, 00:18:53.929 "write": true, 00:18:53.929 "unmap": true, 00:18:53.929 "write_zeroes": true, 00:18:53.929 "flush": true, 00:18:53.929 "reset": true, 00:18:53.929 "compare": false, 00:18:53.929 "compare_and_write": false, 00:18:53.929 "abort": true, 00:18:53.929 "nvme_admin": false, 00:18:53.929 "nvme_io": false 00:18:53.929 }, 00:18:53.929 "memory_domains": [ 00:18:53.929 { 00:18:53.929 "dma_device_id": "system", 00:18:53.929 "dma_device_type": 1 00:18:53.929 }, 00:18:53.929 { 00:18:53.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.929 "dma_device_type": 2 00:18:53.929 } 00:18:53.929 ], 00:18:53.929 "driver_specific": {} 00:18:53.929 } 00:18:53.929 ] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 
blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:18:53.929 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 true 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 Dev_2 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.929 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.929 [ 00:18:53.929 { 00:18:53.929 "name": "Dev_2", 00:18:53.929 "aliases": [ 00:18:53.929 "5a2f1aeb-1356-11ef-8e8f-9dd684e56d79" 00:18:53.929 ], 00:18:53.929 "product_name": "Malloc disk", 00:18:53.929 "block_size": 512, 00:18:53.929 "num_blocks": 262144, 00:18:53.929 "uuid": "5a2f1aeb-1356-11ef-8e8f-9dd684e56d79", 00:18:53.929 "assigned_rate_limits": { 00:18:53.929 "rw_ios_per_sec": 0, 00:18:53.929 "rw_mbytes_per_sec": 0, 00:18:53.929 "r_mbytes_per_sec": 0, 00:18:53.929 "w_mbytes_per_sec": 0 00:18:53.929 }, 00:18:53.929 "claimed": false, 00:18:53.929 "zoned": false, 00:18:53.929 "supported_io_types": { 00:18:53.929 "read": true, 00:18:53.930 "write": true, 00:18:53.930 "unmap": true, 00:18:53.930 "write_zeroes": true, 00:18:53.930 "flush": true, 00:18:53.930 "reset": true, 00:18:53.930 "compare": false, 00:18:53.930 "compare_and_write": false, 00:18:53.930 "abort": true, 00:18:53.930 "nvme_admin": false, 00:18:53.930 "nvme_io": false 00:18:53.930 }, 00:18:53.930 "memory_domains": [ 00:18:53.930 { 00:18:53.930 "dma_device_id": "system", 00:18:53.930 "dma_device_type": 1 00:18:53.930 }, 00:18:53.930 { 00:18:53.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.930 "dma_device_type": 2 00:18:53.930 } 00:18:53.930 ], 
00:18:53.930 "driver_specific": {} 00:18:53.930 } 00:18:53.930 ] 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:18:53.930 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.930 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 49300 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:18:53.930 07:31:47 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 49300 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.930 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 49300 00:18:54.246 Running I/O for 5 seconds... 00:18:54.246 task offset: 163816 on job bdev=EE_Dev_1 fails 00:18:54.246 00:18:54.246 Latency(us) 00:18:54.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.246 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:54.246 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:18:54.246 EE_Dev_1 : 0.00 160583.94 627.28 36496.35 0.00 67.10 20.36 122.39 00:18:54.246 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:18:54.246 Dev_2 : 0.00 184971.10 722.54 0.00 0.00 49.90 29.62 75.58 00:18:54.246 =================================================================================================================== 00:18:54.246 Total : 345555.04 1349.82 36496.35 0.00 57.77 20.36 122.39 00:18:54.246 [2024-05-16 07:31:47.498856] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:54.246 request: 00:18:54.246 { 00:18:54.246 "method": "perform_tests", 00:18:54.246 "req_id": 1 00:18:54.246 } 00:18:54.246 Got JSON-RPC error response 00:18:54.246 response: 00:18:54.246 { 00:18:54.246 "code": -32603, 00:18:54.246 "message": "bdevperf failed with error Operation not permitted" 00:18:54.246 } 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.246 
00:18:54.246 real 0m9.325s 00:18:54.246 user 0m9.593s 00:18:54.246 sys 0m1.187s 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:54.246 ************************************ 00:18:54.246 END TEST bdev_error 00:18:54.246 ************************************ 00:18:54.246 07:31:47 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:18:54.246 07:31:47 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:18:54.246 07:31:47 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:54.246 07:31:47 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:54.246 07:31:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:54.246 ************************************ 00:18:54.246 START TEST bdev_stat 00:18:54.246 ************************************ 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=49331 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 49331' 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:18:54.246 Process Bdev IO statistics testing pid: 49331 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 49331 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 49331 ']' 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.246 07:31:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:54.246 [2024-05-16 07:31:47.754448] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:54.246 [2024-05-16 07:31:47.754654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:54.815 EAL: TSC is not safe to use in SMP mode 00:18:54.815 EAL: TSC is not invariant 00:18:54.815 [2024-05-16 07:31:48.229052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.815 [2024-05-16 07:31:48.310891] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:54.815 [2024-05-16 07:31:48.310961] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:18:54.815 [2024-05-16 07:31:48.313660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.815 [2024-05-16 07:31:48.313652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:55.382 Malloc_STAT 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:55.382 [ 00:18:55.382 { 00:18:55.382 "name": "Malloc_STAT", 00:18:55.382 "aliases": [ 00:18:55.382 "5b1e7539-1356-11ef-8e8f-9dd684e56d79" 00:18:55.382 ], 00:18:55.382 "product_name": "Malloc disk", 00:18:55.382 "block_size": 512, 00:18:55.382 "num_blocks": 262144, 00:18:55.382 "uuid": "5b1e7539-1356-11ef-8e8f-9dd684e56d79", 00:18:55.382 "assigned_rate_limits": { 00:18:55.382 "rw_ios_per_sec": 0, 00:18:55.382 "rw_mbytes_per_sec": 0, 00:18:55.382 "r_mbytes_per_sec": 0, 00:18:55.382 "w_mbytes_per_sec": 0 00:18:55.382 }, 00:18:55.382 "claimed": false, 00:18:55.382 "zoned": false, 00:18:55.382 "supported_io_types": { 00:18:55.382 "read": true, 00:18:55.382 "write": true, 00:18:55.382 "unmap": true, 00:18:55.382 "write_zeroes": true, 00:18:55.382 "flush": true, 00:18:55.382 "reset": true, 00:18:55.382 "compare": false, 00:18:55.382 "compare_and_write": false, 00:18:55.382 "abort": true, 00:18:55.382 "nvme_admin": false, 00:18:55.382 "nvme_io": false 00:18:55.382 }, 00:18:55.382 "memory_domains": [ 00:18:55.382 { 00:18:55.382 "dma_device_id": "system", 00:18:55.382 "dma_device_type": 1 00:18:55.382 }, 00:18:55.382 { 00:18:55.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.382 "dma_device_type": 2 00:18:55.382 } 00:18:55.382 ], 00:18:55.382 "driver_specific": {} 00:18:55.382 } 00:18:55.382 ] 00:18:55.382 
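Malloc_STAT is now in place; the rest of the stat test samples its I/O counters while bdevperf keeps reading from it and checks that the per-channel numbers add up. A minimal sketch of that sampling sequence, using only RPCs that appear in the trace below (names and sizes from the log, default RPC socket assumed):

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # bdevperf runs with core mask 0x3, so reads arrive on two channels (thread_id 2 and 3 below)
  $rpc bdev_get_iostat -b Malloc_STAT          # snapshot 1: total num_read_ops (io_count1)
  $rpc bdev_get_iostat -b Malloc_STAT -c       # per-channel counters, summed into io_count_per_channel_all
  $rpc bdev_get_iostat -b Malloc_STAT          # snapshot 2: total num_read_ops (io_count2)
  # while I/O is still running the per-channel sum must land between the two totals
  $rpc bdev_malloc_delete Malloc_STAT

In this run that bracket is 3472899 <= 3523328 <= 3588355, which is exactly what the -lt / -gt checks at bdev/blockdev.sh@583 verify.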
07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:18:55.382 07:31:48 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:55.640 Running I/O for 10 seconds... 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.610 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:18:57.610 "tick_rate": 2100006180, 00:18:57.610 "ticks": 677766997602, 00:18:57.610 "bdevs": [ 00:18:57.610 { 00:18:57.610 "name": "Malloc_STAT", 00:18:57.610 "bytes_read": 14225019392, 00:18:57.610 "num_read_ops": 3472899, 00:18:57.610 "bytes_written": 0, 00:18:57.610 "num_write_ops": 0, 00:18:57.610 "bytes_unmapped": 0, 00:18:57.610 "num_unmap_ops": 0, 00:18:57.610 "bytes_copied": 0, 00:18:57.610 "num_copy_ops": 0, 00:18:57.610 "read_latency_ticks": 2127167826284, 00:18:57.610 "max_read_latency_ticks": 1143242, 00:18:57.610 "min_read_latency_ticks": 32280, 00:18:57.610 "write_latency_ticks": 0, 00:18:57.610 "max_write_latency_ticks": 0, 00:18:57.610 "min_write_latency_ticks": 0, 00:18:57.610 "unmap_latency_ticks": 0, 00:18:57.611 "max_unmap_latency_ticks": 0, 00:18:57.611 "min_unmap_latency_ticks": 0, 00:18:57.611 "copy_latency_ticks": 0, 00:18:57.611 "max_copy_latency_ticks": 0, 00:18:57.611 "min_copy_latency_ticks": 0, 00:18:57.611 "io_error": {} 00:18:57.611 } 00:18:57.611 ] 00:18:57.611 }' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3472899 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.611 07:31:51 
blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:18:57.611 "tick_rate": 2100006180, 00:18:57.611 "ticks": 677828700794, 00:18:57.611 "name": "Malloc_STAT", 00:18:57.611 "channels": [ 00:18:57.611 { 00:18:57.611 "thread_id": 2, 00:18:57.611 "bytes_read": 7186939904, 00:18:57.611 "num_read_ops": 1754624, 00:18:57.611 "bytes_written": 0, 00:18:57.611 "num_write_ops": 0, 00:18:57.611 "bytes_unmapped": 0, 00:18:57.611 "num_unmap_ops": 0, 00:18:57.611 "bytes_copied": 0, 00:18:57.611 "num_copy_ops": 0, 00:18:57.611 "read_latency_ticks": 1079296691332, 00:18:57.611 "max_read_latency_ticks": 1143242, 00:18:57.611 "min_read_latency_ticks": 541890, 00:18:57.611 "write_latency_ticks": 0, 00:18:57.611 "max_write_latency_ticks": 0, 00:18:57.611 "min_write_latency_ticks": 0, 00:18:57.611 "unmap_latency_ticks": 0, 00:18:57.611 "max_unmap_latency_ticks": 0, 00:18:57.611 "min_unmap_latency_ticks": 0, 00:18:57.611 "copy_latency_ticks": 0, 00:18:57.611 "max_copy_latency_ticks": 0, 00:18:57.611 "min_copy_latency_ticks": 0 00:18:57.611 }, 00:18:57.611 { 00:18:57.611 "thread_id": 3, 00:18:57.611 "bytes_read": 7244611584, 00:18:57.611 "num_read_ops": 1768704, 00:18:57.611 "bytes_written": 0, 00:18:57.611 "num_write_ops": 0, 00:18:57.611 "bytes_unmapped": 0, 00:18:57.611 "num_unmap_ops": 0, 00:18:57.611 "bytes_copied": 0, 00:18:57.611 "num_copy_ops": 0, 00:18:57.611 "read_latency_ticks": 1079380909344, 00:18:57.611 "max_read_latency_ticks": 1140248, 00:18:57.611 "min_read_latency_ticks": 536272, 00:18:57.611 "write_latency_ticks": 0, 00:18:57.611 "max_write_latency_ticks": 0, 00:18:57.611 "min_write_latency_ticks": 0, 00:18:57.611 "unmap_latency_ticks": 0, 00:18:57.611 "max_unmap_latency_ticks": 0, 00:18:57.611 "min_unmap_latency_ticks": 0, 00:18:57.611 "copy_latency_ticks": 0, 00:18:57.611 "max_copy_latency_ticks": 0, 00:18:57.611 "min_copy_latency_ticks": 0 00:18:57.611 } 00:18:57.611 ] 00:18:57.611 }' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1754624 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1754624 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1768704 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3523328 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:18:57.611 "tick_rate": 2100006180, 00:18:57.611 "ticks": 677906917014, 00:18:57.611 "bdevs": [ 00:18:57.611 { 00:18:57.611 "name": "Malloc_STAT", 00:18:57.611 "bytes_read": 14697927168, 00:18:57.611 "num_read_ops": 3588355, 00:18:57.611 "bytes_written": 0, 00:18:57.611 "num_write_ops": 0, 00:18:57.611 "bytes_unmapped": 0, 00:18:57.611 "num_unmap_ops": 0, 00:18:57.611 "bytes_copied": 0, 00:18:57.611 "num_copy_ops": 0, 00:18:57.611 
"read_latency_ticks": 2198660218984, 00:18:57.611 "max_read_latency_ticks": 1143242, 00:18:57.611 "min_read_latency_ticks": 32280, 00:18:57.611 "write_latency_ticks": 0, 00:18:57.611 "max_write_latency_ticks": 0, 00:18:57.611 "min_write_latency_ticks": 0, 00:18:57.611 "unmap_latency_ticks": 0, 00:18:57.611 "max_unmap_latency_ticks": 0, 00:18:57.611 "min_unmap_latency_ticks": 0, 00:18:57.611 "copy_latency_ticks": 0, 00:18:57.611 "max_copy_latency_ticks": 0, 00:18:57.611 "min_copy_latency_ticks": 0, 00:18:57.611 "io_error": {} 00:18:57.611 } 00:18:57.611 ] 00:18:57.611 }' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3588355 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3523328 -lt 3472899 ']' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3523328 -gt 3588355 ']' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:57.611 00:18:57.611 Latency(us) 00:18:57.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.611 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:18:57.611 Malloc_STAT : 2.07 872726.43 3409.09 0.00 0.00 293.06 65.34 546.13 00:18:57.611 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:18:57.611 Malloc_STAT : 2.07 879781.54 3436.65 0.00 0.00 290.70 50.71 546.13 00:18:57.611 =================================================================================================================== 00:18:57.611 Total : 1752507.97 6845.73 0.00 0.00 291.88 50.71 546.13 00:18:57.611 0 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 49331 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 49331 ']' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 49331 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps -c -o command 49331 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # tail -1 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:18:57.611 killing process with pid 49331 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49331' 00:18:57.611 Received shutdown signal, test time was about 2.105342 seconds 00:18:57.611 00:18:57.611 Latency(us) 00:18:57.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.611 =================================================================================================================== 00:18:57.611 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 49331 00:18:57.611 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 49331 00:18:57.871 07:31:51 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:18:57.871 00:18:57.871 real 0m3.579s 00:18:57.871 user 0m6.648s 00:18:57.871 sys 0m0.665s 00:18:57.871 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:57.871 07:31:51 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:18:57.871 ************************************ 00:18:57.871 END TEST bdev_stat 00:18:57.871 ************************************ 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:18:57.871 07:31:51 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:18:57.871 00:18:57.871 real 1m33.236s 00:18:57.871 user 4m31.377s 00:18:57.871 sys 0m25.889s 00:18:57.871 07:31:51 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:57.871 07:31:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:18:57.871 ************************************ 00:18:57.871 END TEST blockdev_general 00:18:57.871 ************************************ 00:18:57.871 07:31:51 -- spdk/autotest.sh@186 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:57.871 07:31:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:57.871 07:31:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:57.871 07:31:51 -- common/autotest_common.sh@10 -- # set +x 00:18:57.871 ************************************ 00:18:57.871 START TEST bdev_raid 00:18:57.871 ************************************ 00:18:57.871 07:31:51 bdev_raid -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:58.133 * Looking for test storage... 
00:18:58.133 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:58.133 07:31:51 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@788 -- # trap 'on_error_exit;' ERR 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@790 -- # base_blocklen=512 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@792 -- # uname -s 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@792 -- # '[' FreeBSD = Linux ']' 00:18:58.133 07:31:51 bdev_raid -- bdev/bdev_raid.sh@799 -- # run_test raid0_resize_test raid0_resize_test 00:18:58.133 07:31:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:58.133 07:31:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:58.133 07:31:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.133 ************************************ 00:18:58.133 START TEST raid0_resize_test 00:18:58.133 ************************************ 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # raid_pid=49431 00:18:58.133 Process raid pid: 49431 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # echo 'Process raid pid: 49431' 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # waitforlisten 49431 /var/tmp/spdk-raid.sock 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 49431 ']' 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:58.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.133 07:31:51 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:58.133 [2024-05-16 07:31:51.653417] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:18:58.133 [2024-05-16 07:31:51.653673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:58.704 EAL: TSC is not safe to use in SMP mode 00:18:58.704 EAL: TSC is not invariant 00:18:58.704 [2024-05-16 07:31:52.122983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.704 [2024-05-16 07:31:52.203526] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:58.704 [2024-05-16 07:31:52.205576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.704 [2024-05-16 07:31:52.206295] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.704 [2024-05-16 07:31:52.206307] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.271 07:31:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:59.271 07:31:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:18:59.271 07:31:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:18:59.271 Base_1 00:18:59.271 07:31:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:18:59.529 Base_2 00:18:59.786 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@363 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:18:59.786 [2024-05-16 07:31:53.308569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:18:59.786 [2024-05-16 07:31:53.309001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:18:59.786 [2024-05-16 07:31:53.309021] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aebba00 00:18:59.786 [2024-05-16 07:31:53.309025] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:59.786 [2024-05-16 07:31:53.309055] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82af1ee20 00:18:59.786 [2024-05-16 07:31:53.309101] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aebba00 00:18:59.786 [2024-05-16 07:31:53.309105] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82aebba00 00:18:59.786 [2024-05-16 07:31:53.309131] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.786 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:19:00.045 [2024-05-16 07:31:53.524553] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:00.045 [2024-05-16 07:31:53.524571] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:19:00.045 true 00:19:00.045 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # jq '.[].num_blocks' 00:19:00.045 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:19:00.305 [2024-05-16 07:31:53.752563] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.305 07:31:53 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # blkcnt=131072 00:19:00.305 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # raid_size_mb=64 00:19:00.305 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # '[' 64 '!=' 64 ']' 00:19:00.305 07:31:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:19:00.564 [2024-05-16 07:31:53.984552] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:00.564 [2024-05-16 07:31:53.984570] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:19:00.564 [2024-05-16 07:31:53.984592] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:19:00.564 true 00:19:00.564 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # jq '.[].num_blocks' 00:19:00.564 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:19:00.821 [2024-05-16 07:31:54.220568] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # blkcnt=262144 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # raid_size_mb=128 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 49431 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 49431 ']' 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 49431 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps -c -o command 49431 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # tail -1 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:00.821 killing process with pid 49431 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49431' 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 49431 00:19:00.821 [2024-05-16 07:31:54.251061] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.821 [2024-05-16 07:31:54.251076] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.821 [2024-05-16 07:31:54.251095] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.821 [2024-05-16 07:31:54.251099] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aebba00 name Raid, state offline 00:19:00.821 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 49431 00:19:00.821 [2024-05-16 07:31:54.251215] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.080 07:31:54 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:19:01.080 00:19:01.080 real 0m2.774s 00:19:01.080 user 0m4.079s 00:19:01.080 sys 0m0.740s 00:19:01.080 ************************************ 00:19:01.080 END TEST raid0_resize_test 00:19:01.080 ************************************ 00:19:01.080 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:01.080 07:31:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.080 07:31:54 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:19:01.080 07:31:54 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:19:01.080 07:31:54 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:19:01.080 07:31:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:01.080 07:31:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:01.080 07:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.080 ************************************ 00:19:01.080 START TEST raid_state_function_test 00:19:01.080 ************************************ 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:01.080 07:31:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=49481 00:19:01.080 Process raid pid: 49481 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 49481' 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 49481 /var/tmp/spdk-raid.sock 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 49481 ']' 00:19:01.080 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:01.081 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:01.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:01.081 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:01.081 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:01.081 07:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.081 07:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:01.081 [2024-05-16 07:31:54.472346] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:01.081 [2024-05-16 07:31:54.472529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:01.646 EAL: TSC is not safe to use in SMP mode 00:19:01.646 EAL: TSC is not invariant 00:19:01.646 [2024-05-16 07:31:54.918396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.646 [2024-05-16 07:31:54.998597] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:19:01.646 [2024-05-16 07:31:55.000662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.646 [2024-05-16 07:31:55.001365] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.646 [2024-05-16 07:31:55.001384] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:02.214 [2024-05-16 07:31:55.711530] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:02.214 [2024-05-16 07:31:55.711583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:02.214 [2024-05-16 07:31:55.711587] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.214 [2024-05-16 07:31:55.711594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.214 07:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.472 07:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.472 "name": "Existed_Raid", 00:19:02.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.472 "strip_size_kb": 64, 00:19:02.472 "state": "configuring", 00:19:02.472 "raid_level": "raid0", 00:19:02.472 "superblock": false, 00:19:02.472 "num_base_bdevs": 2, 00:19:02.472 "num_base_bdevs_discovered": 0, 00:19:02.472 "num_base_bdevs_operational": 2, 00:19:02.472 "base_bdevs_list": [ 00:19:02.472 { 00:19:02.472 "name": "BaseBdev1", 00:19:02.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.472 "is_configured": false, 00:19:02.472 "data_offset": 0, 00:19:02.472 "data_size": 0 00:19:02.472 }, 00:19:02.472 { 00:19:02.472 "name": 
"BaseBdev2", 00:19:02.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.472 "is_configured": false, 00:19:02.472 "data_offset": 0, 00:19:02.472 "data_size": 0 00:19:02.472 } 00:19:02.472 ] 00:19:02.472 }' 00:19:02.472 07:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.472 07:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.731 07:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.296 [2024-05-16 07:31:56.571595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.296 [2024-05-16 07:31:56.571618] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a85f500 name Existed_Raid, state configuring 00:19:03.296 07:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:03.296 [2024-05-16 07:31:56.835607] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.296 [2024-05-16 07:31:56.835650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.296 [2024-05-16 07:31:56.835654] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.296 [2024-05-16 07:31:56.835661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.296 07:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.554 [2024-05-16 07:31:57.104496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.554 BaseBdev1 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:03.554 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.810 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:04.068 [ 00:19:04.068 { 00:19:04.068 "name": "BaseBdev1", 00:19:04.068 "aliases": [ 00:19:04.068 "6008244a-1356-11ef-8e8f-9dd684e56d79" 00:19:04.068 ], 00:19:04.068 "product_name": "Malloc disk", 00:19:04.068 "block_size": 512, 00:19:04.068 "num_blocks": 65536, 00:19:04.068 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:04.068 "assigned_rate_limits": { 00:19:04.068 "rw_ios_per_sec": 0, 00:19:04.068 "rw_mbytes_per_sec": 0, 00:19:04.068 "r_mbytes_per_sec": 0, 00:19:04.068 
"w_mbytes_per_sec": 0 00:19:04.068 }, 00:19:04.068 "claimed": true, 00:19:04.068 "claim_type": "exclusive_write", 00:19:04.068 "zoned": false, 00:19:04.068 "supported_io_types": { 00:19:04.068 "read": true, 00:19:04.068 "write": true, 00:19:04.068 "unmap": true, 00:19:04.068 "write_zeroes": true, 00:19:04.068 "flush": true, 00:19:04.068 "reset": true, 00:19:04.068 "compare": false, 00:19:04.068 "compare_and_write": false, 00:19:04.068 "abort": true, 00:19:04.068 "nvme_admin": false, 00:19:04.069 "nvme_io": false 00:19:04.069 }, 00:19:04.069 "memory_domains": [ 00:19:04.069 { 00:19:04.069 "dma_device_id": "system", 00:19:04.069 "dma_device_type": 1 00:19:04.069 }, 00:19:04.069 { 00:19:04.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.069 "dma_device_type": 2 00:19:04.069 } 00:19:04.069 ], 00:19:04.069 "driver_specific": {} 00:19:04.069 } 00:19:04.069 ] 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.327 "name": "Existed_Raid", 00:19:04.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.327 "strip_size_kb": 64, 00:19:04.327 "state": "configuring", 00:19:04.327 "raid_level": "raid0", 00:19:04.327 "superblock": false, 00:19:04.327 "num_base_bdevs": 2, 00:19:04.327 "num_base_bdevs_discovered": 1, 00:19:04.327 "num_base_bdevs_operational": 2, 00:19:04.327 "base_bdevs_list": [ 00:19:04.327 { 00:19:04.327 "name": "BaseBdev1", 00:19:04.327 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:04.327 "is_configured": true, 00:19:04.327 "data_offset": 0, 00:19:04.327 "data_size": 65536 00:19:04.327 }, 00:19:04.327 { 00:19:04.327 "name": "BaseBdev2", 00:19:04.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.327 "is_configured": false, 00:19:04.327 "data_offset": 0, 00:19:04.327 "data_size": 0 00:19:04.327 } 00:19:04.327 ] 00:19:04.327 }' 00:19:04.327 07:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.327 
07:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.894 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:04.894 [2024-05-16 07:31:58.343638] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.894 [2024-05-16 07:31:58.343665] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a85f500 name Existed_Raid, state configuring 00:19:04.894 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:05.153 [2024-05-16 07:31:58.559675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.153 [2024-05-16 07:31:58.560389] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.153 [2024-05-16 07:31:58.560437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.153 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.412 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.412 "name": "Existed_Raid", 00:19:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.412 "strip_size_kb": 64, 00:19:05.412 "state": "configuring", 00:19:05.412 "raid_level": "raid0", 00:19:05.412 "superblock": false, 00:19:05.412 "num_base_bdevs": 2, 00:19:05.412 "num_base_bdevs_discovered": 1, 00:19:05.412 "num_base_bdevs_operational": 2, 00:19:05.412 "base_bdevs_list": [ 00:19:05.412 { 00:19:05.412 "name": "BaseBdev1", 00:19:05.412 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:05.412 "is_configured": true, 00:19:05.412 "data_offset": 0, 00:19:05.412 
"data_size": 65536 00:19:05.412 }, 00:19:05.412 { 00:19:05.412 "name": "BaseBdev2", 00:19:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.412 "is_configured": false, 00:19:05.412 "data_offset": 0, 00:19:05.412 "data_size": 0 00:19:05.412 } 00:19:05.412 ] 00:19:05.412 }' 00:19:05.412 07:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.412 07:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.670 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:05.929 [2024-05-16 07:31:59.395841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.929 [2024-05-16 07:31:59.395870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a85fa00 00:19:05.929 [2024-05-16 07:31:59.395874] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:05.929 [2024-05-16 07:31:59.395894] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8c2ec0 00:19:05.929 [2024-05-16 07:31:59.395980] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a85fa00 00:19:05.929 [2024-05-16 07:31:59.395984] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a85fa00 00:19:05.929 [2024-05-16 07:31:59.396013] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.929 BaseBdev2 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:05.929 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.188 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.447 [ 00:19:06.447 { 00:19:06.447 "name": "BaseBdev2", 00:19:06.447 "aliases": [ 00:19:06.447 "6165e38b-1356-11ef-8e8f-9dd684e56d79" 00:19:06.447 ], 00:19:06.448 "product_name": "Malloc disk", 00:19:06.448 "block_size": 512, 00:19:06.448 "num_blocks": 65536, 00:19:06.448 "uuid": "6165e38b-1356-11ef-8e8f-9dd684e56d79", 00:19:06.448 "assigned_rate_limits": { 00:19:06.448 "rw_ios_per_sec": 0, 00:19:06.448 "rw_mbytes_per_sec": 0, 00:19:06.448 "r_mbytes_per_sec": 0, 00:19:06.448 "w_mbytes_per_sec": 0 00:19:06.448 }, 00:19:06.448 "claimed": true, 00:19:06.448 "claim_type": "exclusive_write", 00:19:06.448 "zoned": false, 00:19:06.448 "supported_io_types": { 00:19:06.448 "read": true, 00:19:06.448 "write": true, 00:19:06.448 "unmap": true, 00:19:06.448 "write_zeroes": true, 00:19:06.448 "flush": true, 00:19:06.448 "reset": true, 00:19:06.448 "compare": false, 
00:19:06.448 "compare_and_write": false, 00:19:06.448 "abort": true, 00:19:06.448 "nvme_admin": false, 00:19:06.448 "nvme_io": false 00:19:06.448 }, 00:19:06.448 "memory_domains": [ 00:19:06.448 { 00:19:06.448 "dma_device_id": "system", 00:19:06.448 "dma_device_type": 1 00:19:06.448 }, 00:19:06.448 { 00:19:06.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.448 "dma_device_type": 2 00:19:06.448 } 00:19:06.448 ], 00:19:06.448 "driver_specific": {} 00:19:06.448 } 00:19:06.448 ] 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.448 07:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.707 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.707 "name": "Existed_Raid", 00:19:06.707 "uuid": "6165e989-1356-11ef-8e8f-9dd684e56d79", 00:19:06.707 "strip_size_kb": 64, 00:19:06.707 "state": "online", 00:19:06.707 "raid_level": "raid0", 00:19:06.707 "superblock": false, 00:19:06.707 "num_base_bdevs": 2, 00:19:06.707 "num_base_bdevs_discovered": 2, 00:19:06.707 "num_base_bdevs_operational": 2, 00:19:06.707 "base_bdevs_list": [ 00:19:06.707 { 00:19:06.707 "name": "BaseBdev1", 00:19:06.707 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:06.707 "is_configured": true, 00:19:06.707 "data_offset": 0, 00:19:06.707 "data_size": 65536 00:19:06.707 }, 00:19:06.707 { 00:19:06.707 "name": "BaseBdev2", 00:19:06.707 "uuid": "6165e38b-1356-11ef-8e8f-9dd684e56d79", 00:19:06.707 "is_configured": true, 00:19:06.707 "data_offset": 0, 00:19:06.707 "data_size": 65536 00:19:06.707 } 00:19:06.707 ] 00:19:06.707 }' 00:19:06.707 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.707 07:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # 
verify_raid_bdev_properties Existed_Raid 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:06.965 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:07.224 [2024-05-16 07:32:00.707803] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.224 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:07.224 "name": "Existed_Raid", 00:19:07.224 "aliases": [ 00:19:07.224 "6165e989-1356-11ef-8e8f-9dd684e56d79" 00:19:07.224 ], 00:19:07.224 "product_name": "Raid Volume", 00:19:07.224 "block_size": 512, 00:19:07.224 "num_blocks": 131072, 00:19:07.224 "uuid": "6165e989-1356-11ef-8e8f-9dd684e56d79", 00:19:07.224 "assigned_rate_limits": { 00:19:07.224 "rw_ios_per_sec": 0, 00:19:07.224 "rw_mbytes_per_sec": 0, 00:19:07.224 "r_mbytes_per_sec": 0, 00:19:07.224 "w_mbytes_per_sec": 0 00:19:07.224 }, 00:19:07.224 "claimed": false, 00:19:07.224 "zoned": false, 00:19:07.224 "supported_io_types": { 00:19:07.224 "read": true, 00:19:07.224 "write": true, 00:19:07.224 "unmap": true, 00:19:07.224 "write_zeroes": true, 00:19:07.224 "flush": true, 00:19:07.224 "reset": true, 00:19:07.224 "compare": false, 00:19:07.224 "compare_and_write": false, 00:19:07.224 "abort": false, 00:19:07.224 "nvme_admin": false, 00:19:07.224 "nvme_io": false 00:19:07.224 }, 00:19:07.224 "memory_domains": [ 00:19:07.224 { 00:19:07.224 "dma_device_id": "system", 00:19:07.224 "dma_device_type": 1 00:19:07.224 }, 00:19:07.224 { 00:19:07.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.224 "dma_device_type": 2 00:19:07.224 }, 00:19:07.224 { 00:19:07.224 "dma_device_id": "system", 00:19:07.224 "dma_device_type": 1 00:19:07.224 }, 00:19:07.224 { 00:19:07.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.224 "dma_device_type": 2 00:19:07.224 } 00:19:07.224 ], 00:19:07.224 "driver_specific": { 00:19:07.224 "raid": { 00:19:07.224 "uuid": "6165e989-1356-11ef-8e8f-9dd684e56d79", 00:19:07.224 "strip_size_kb": 64, 00:19:07.225 "state": "online", 00:19:07.225 "raid_level": "raid0", 00:19:07.225 "superblock": false, 00:19:07.225 "num_base_bdevs": 2, 00:19:07.225 "num_base_bdevs_discovered": 2, 00:19:07.225 "num_base_bdevs_operational": 2, 00:19:07.225 "base_bdevs_list": [ 00:19:07.225 { 00:19:07.225 "name": "BaseBdev1", 00:19:07.225 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:07.225 "is_configured": true, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 }, 00:19:07.225 { 00:19:07.225 "name": "BaseBdev2", 00:19:07.225 "uuid": "6165e38b-1356-11ef-8e8f-9dd684e56d79", 00:19:07.225 "is_configured": true, 00:19:07.225 "data_offset": 0, 00:19:07.225 "data_size": 65536 00:19:07.225 } 00:19:07.225 ] 00:19:07.225 } 00:19:07.225 } 00:19:07.225 }' 00:19:07.225 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 
-- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:07.225 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:07.225 BaseBdev2' 00:19:07.225 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:07.225 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:07.225 07:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:07.484 "name": "BaseBdev1", 00:19:07.484 "aliases": [ 00:19:07.484 "6008244a-1356-11ef-8e8f-9dd684e56d79" 00:19:07.484 ], 00:19:07.484 "product_name": "Malloc disk", 00:19:07.484 "block_size": 512, 00:19:07.484 "num_blocks": 65536, 00:19:07.484 "uuid": "6008244a-1356-11ef-8e8f-9dd684e56d79", 00:19:07.484 "assigned_rate_limits": { 00:19:07.484 "rw_ios_per_sec": 0, 00:19:07.484 "rw_mbytes_per_sec": 0, 00:19:07.484 "r_mbytes_per_sec": 0, 00:19:07.484 "w_mbytes_per_sec": 0 00:19:07.484 }, 00:19:07.484 "claimed": true, 00:19:07.484 "claim_type": "exclusive_write", 00:19:07.484 "zoned": false, 00:19:07.484 "supported_io_types": { 00:19:07.484 "read": true, 00:19:07.484 "write": true, 00:19:07.484 "unmap": true, 00:19:07.484 "write_zeroes": true, 00:19:07.484 "flush": true, 00:19:07.484 "reset": true, 00:19:07.484 "compare": false, 00:19:07.484 "compare_and_write": false, 00:19:07.484 "abort": true, 00:19:07.484 "nvme_admin": false, 00:19:07.484 "nvme_io": false 00:19:07.484 }, 00:19:07.484 "memory_domains": [ 00:19:07.484 { 00:19:07.484 "dma_device_id": "system", 00:19:07.484 "dma_device_type": 1 00:19:07.484 }, 00:19:07.484 { 00:19:07.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.484 "dma_device_type": 2 00:19:07.484 } 00:19:07.484 ], 00:19:07.484 "driver_specific": {} 00:19:07.484 }' 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:07.484 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:07.742 "name": "BaseBdev2", 00:19:07.742 "aliases": [ 00:19:07.742 "6165e38b-1356-11ef-8e8f-9dd684e56d79" 00:19:07.742 ], 00:19:07.742 "product_name": "Malloc disk", 00:19:07.742 "block_size": 512, 00:19:07.742 "num_blocks": 65536, 00:19:07.742 "uuid": "6165e38b-1356-11ef-8e8f-9dd684e56d79", 00:19:07.742 "assigned_rate_limits": { 00:19:07.742 "rw_ios_per_sec": 0, 00:19:07.742 "rw_mbytes_per_sec": 0, 00:19:07.742 "r_mbytes_per_sec": 0, 00:19:07.742 "w_mbytes_per_sec": 0 00:19:07.742 }, 00:19:07.742 "claimed": true, 00:19:07.742 "claim_type": "exclusive_write", 00:19:07.742 "zoned": false, 00:19:07.742 "supported_io_types": { 00:19:07.742 "read": true, 00:19:07.742 "write": true, 00:19:07.742 "unmap": true, 00:19:07.742 "write_zeroes": true, 00:19:07.742 "flush": true, 00:19:07.742 "reset": true, 00:19:07.742 "compare": false, 00:19:07.742 "compare_and_write": false, 00:19:07.742 "abort": true, 00:19:07.742 "nvme_admin": false, 00:19:07.742 "nvme_io": false 00:19:07.742 }, 00:19:07.742 "memory_domains": [ 00:19:07.742 { 00:19:07.742 "dma_device_id": "system", 00:19:07.742 "dma_device_type": 1 00:19:07.742 }, 00:19:07.742 { 00:19:07.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.742 "dma_device_type": 2 00:19:07.742 } 00:19:07.742 ], 00:19:07.742 "driver_specific": {} 00:19:07.742 }' 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:07.742 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:08.001 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:08.260 [2024-05-16 07:32:01.583839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.260 [2024-05-16 07:32:01.583868] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.260 [2024-05-16 07:32:01.583882] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # 
has_redundancy raid0 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.260 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.261 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.261 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.261 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.519 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.519 "name": "Existed_Raid", 00:19:08.519 "uuid": "6165e989-1356-11ef-8e8f-9dd684e56d79", 00:19:08.519 "strip_size_kb": 64, 00:19:08.519 "state": "offline", 00:19:08.519 "raid_level": "raid0", 00:19:08.519 "superblock": false, 00:19:08.519 "num_base_bdevs": 2, 00:19:08.519 "num_base_bdevs_discovered": 1, 00:19:08.519 "num_base_bdevs_operational": 1, 00:19:08.519 "base_bdevs_list": [ 00:19:08.519 { 00:19:08.519 "name": null, 00:19:08.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.519 "is_configured": false, 00:19:08.519 "data_offset": 0, 00:19:08.519 "data_size": 65536 00:19:08.519 }, 00:19:08.519 { 00:19:08.519 "name": "BaseBdev2", 00:19:08.519 "uuid": "6165e38b-1356-11ef-8e8f-9dd684e56d79", 00:19:08.519 "is_configured": true, 00:19:08.519 "data_offset": 0, 00:19:08.519 "data_size": 65536 00:19:08.519 } 00:19:08.519 ] 00:19:08.519 }' 00:19:08.519 07:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.519 07:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.778 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:08.778 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:08.778 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.778 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:09.037 07:32:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:09.037 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.037 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:09.295 [2024-05-16 07:32:02.724896] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.295 [2024-05-16 07:32:02.724954] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a85fa00 name Existed_Raid, state offline 00:19:09.296 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:09.296 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:09.296 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.296 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:09.554 07:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 49481 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 49481 ']' 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 49481 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 49481 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:09.554 killing process with pid 49481 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49481' 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 49481 00:19:09.554 [2024-05-16 07:32:03.013422] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:09.554 [2024-05-16 07:32:03.013481] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:09.554 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 49481 00:19:09.813 07:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:19:09.813 00:19:09.813 real 0m8.733s 00:19:09.814 user 0m15.231s 00:19:09.814 sys 0m1.498s 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.814 ************************************ 00:19:09.814 END TEST raid_state_function_test 
00:19:09.814 ************************************ 00:19:09.814 07:32:03 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:19:09.814 07:32:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:09.814 07:32:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:09.814 07:32:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.814 ************************************ 00:19:09.814 START TEST raid_state_function_test_sb 00:19:09.814 ************************************ 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=49752 00:19:09.814 Process raid pid: 49752 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 
49752' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 49752 /var/tmp/spdk-raid.sock 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 49752 ']' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:09.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:09.814 07:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.814 [2024-05-16 07:32:03.245726] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:09.814 [2024-05-16 07:32:03.245964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:10.378 EAL: TSC is not safe to use in SMP mode 00:19:10.378 EAL: TSC is not invariant 00:19:10.378 [2024-05-16 07:32:03.722908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.378 [2024-05-16 07:32:03.829457] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
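The xtrace above drives the raid0 state machine entirely through the SPDK JSON-RPC client. A minimal sketch of the same sequence, assuming a bdev_svc (or other SPDK app) is already listening on /var/tmp/spdk-raid.sock and that scripts/rpc.py from the SPDK tree is on hand:

    # create two 32 MiB malloc base bdevs with 512-byte blocks, as the test does
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # assemble them into a raid0 volume with a 64 KiB strip size
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # inspect the raid bdev: "state" stays "configuring" until every base bdev exists, then flips to "online"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # raid0 has no redundancy, so deleting a base bdev drives the volume to "offline"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    # clean up
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

This mirrors what verify_raid_bdev_state asserts at each step (raid_level, strip_size_kb, state, and num_base_bdevs_discovered from the bdev_raid_get_bdevs output); the raid_state_function_test_sb variant that starts here runs the same sequence but passes -s to bdev_raid_create to enable the on-disk superblock.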
00:19:10.378 [2024-05-16 07:32:03.832443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.378 [2024-05-16 07:32:03.833625] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.378 [2024-05-16 07:32:03.833650] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.969 07:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.969 07:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:10.969 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:11.228 [2024-05-16 07:32:04.604393] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.228 [2024-05-16 07:32:04.604453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.228 [2024-05-16 07:32:04.604459] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.228 [2024-05-16 07:32:04.604467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.228 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.229 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.485 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.485 "name": "Existed_Raid", 00:19:11.485 "uuid": "6480aad2-1356-11ef-8e8f-9dd684e56d79", 00:19:11.485 "strip_size_kb": 64, 00:19:11.485 "state": "configuring", 00:19:11.485 "raid_level": "raid0", 00:19:11.485 "superblock": true, 00:19:11.485 "num_base_bdevs": 2, 00:19:11.485 "num_base_bdevs_discovered": 0, 00:19:11.485 "num_base_bdevs_operational": 2, 00:19:11.485 "base_bdevs_list": [ 00:19:11.485 { 00:19:11.485 "name": "BaseBdev1", 00:19:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.485 "is_configured": false, 00:19:11.485 "data_offset": 0, 00:19:11.485 "data_size": 0 
00:19:11.485 }, 00:19:11.485 { 00:19:11.485 "name": "BaseBdev2", 00:19:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.486 "is_configured": false, 00:19:11.486 "data_offset": 0, 00:19:11.486 "data_size": 0 00:19:11.486 } 00:19:11.486 ] 00:19:11.486 }' 00:19:11.486 07:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.486 07:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.743 07:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.004 [2024-05-16 07:32:05.536405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.004 [2024-05-16 07:32:05.536431] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d605500 name Existed_Raid, state configuring 00:19:12.004 07:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:12.572 [2024-05-16 07:32:05.876430] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.572 [2024-05-16 07:32:05.876479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.572 [2024-05-16 07:32:05.876484] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.572 [2024-05-16 07:32:05.876492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.572 07:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.572 [2024-05-16 07:32:06.101349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.572 BaseBdev1 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:12.572 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.831 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.090 [ 00:19:13.090 { 00:19:13.090 "name": "BaseBdev1", 00:19:13.090 "aliases": [ 00:19:13.090 "6564f1a4-1356-11ef-8e8f-9dd684e56d79" 00:19:13.090 ], 00:19:13.090 "product_name": "Malloc disk", 00:19:13.090 "block_size": 512, 00:19:13.090 "num_blocks": 65536, 00:19:13.090 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:13.090 "assigned_rate_limits": { 00:19:13.090 "rw_ios_per_sec": 0, 
00:19:13.090 "rw_mbytes_per_sec": 0, 00:19:13.090 "r_mbytes_per_sec": 0, 00:19:13.090 "w_mbytes_per_sec": 0 00:19:13.090 }, 00:19:13.090 "claimed": true, 00:19:13.090 "claim_type": "exclusive_write", 00:19:13.090 "zoned": false, 00:19:13.090 "supported_io_types": { 00:19:13.090 "read": true, 00:19:13.090 "write": true, 00:19:13.090 "unmap": true, 00:19:13.090 "write_zeroes": true, 00:19:13.090 "flush": true, 00:19:13.090 "reset": true, 00:19:13.090 "compare": false, 00:19:13.090 "compare_and_write": false, 00:19:13.090 "abort": true, 00:19:13.090 "nvme_admin": false, 00:19:13.090 "nvme_io": false 00:19:13.090 }, 00:19:13.090 "memory_domains": [ 00:19:13.090 { 00:19:13.090 "dma_device_id": "system", 00:19:13.090 "dma_device_type": 1 00:19:13.090 }, 00:19:13.090 { 00:19:13.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.090 "dma_device_type": 2 00:19:13.090 } 00:19:13.090 ], 00:19:13.090 "driver_specific": {} 00:19:13.090 } 00:19:13.090 ] 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.090 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.350 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.350 "name": "Existed_Raid", 00:19:13.350 "uuid": "6542c3c6-1356-11ef-8e8f-9dd684e56d79", 00:19:13.350 "strip_size_kb": 64, 00:19:13.350 "state": "configuring", 00:19:13.350 "raid_level": "raid0", 00:19:13.350 "superblock": true, 00:19:13.350 "num_base_bdevs": 2, 00:19:13.350 "num_base_bdevs_discovered": 1, 00:19:13.350 "num_base_bdevs_operational": 2, 00:19:13.350 "base_bdevs_list": [ 00:19:13.350 { 00:19:13.350 "name": "BaseBdev1", 00:19:13.350 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:13.350 "is_configured": true, 00:19:13.350 "data_offset": 2048, 00:19:13.350 "data_size": 63488 00:19:13.350 }, 00:19:13.350 { 00:19:13.350 "name": "BaseBdev2", 00:19:13.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.350 "is_configured": false, 00:19:13.350 "data_offset": 0, 00:19:13.350 "data_size": 0 00:19:13.350 } 00:19:13.350 ] 
00:19:13.350 }' 00:19:13.350 07:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.350 07:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.621 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:13.880 [2024-05-16 07:32:07.336456] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.880 [2024-05-16 07:32:07.336493] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d605500 name Existed_Raid, state configuring 00:19:13.880 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:14.138 [2024-05-16 07:32:07.564483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.138 [2024-05-16 07:32:07.565210] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.138 [2024-05-16 07:32:07.565259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.138 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.397 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.397 "name": "Existed_Raid", 00:19:14.397 "uuid": "66445729-1356-11ef-8e8f-9dd684e56d79", 00:19:14.397 "strip_size_kb": 64, 00:19:14.397 "state": "configuring", 00:19:14.397 "raid_level": "raid0", 00:19:14.397 "superblock": true, 00:19:14.397 "num_base_bdevs": 2, 00:19:14.397 "num_base_bdevs_discovered": 1, 00:19:14.397 "num_base_bdevs_operational": 2, 00:19:14.397 "base_bdevs_list": [ 
00:19:14.397 { 00:19:14.397 "name": "BaseBdev1", 00:19:14.397 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:14.397 "is_configured": true, 00:19:14.397 "data_offset": 2048, 00:19:14.397 "data_size": 63488 00:19:14.397 }, 00:19:14.397 { 00:19:14.397 "name": "BaseBdev2", 00:19:14.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.397 "is_configured": false, 00:19:14.397 "data_offset": 0, 00:19:14.397 "data_size": 0 00:19:14.397 } 00:19:14.397 ] 00:19:14.397 }' 00:19:14.397 07:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.397 07:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.655 07:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:14.913 [2024-05-16 07:32:08.368608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.913 [2024-05-16 07:32:08.368689] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d605a00 00:19:14.913 [2024-05-16 07:32:08.368694] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:14.913 [2024-05-16 07:32:08.368713] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d668ec0 00:19:14.913 [2024-05-16 07:32:08.368746] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d605a00 00:19:14.913 [2024-05-16 07:32:08.368750] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d605a00 00:19:14.913 [2024-05-16 07:32:08.368766] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.913 BaseBdev2 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:14.913 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.171 07:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:15.739 [ 00:19:15.739 { 00:19:15.739 "name": "BaseBdev2", 00:19:15.739 "aliases": [ 00:19:15.739 "66bf062a-1356-11ef-8e8f-9dd684e56d79" 00:19:15.739 ], 00:19:15.739 "product_name": "Malloc disk", 00:19:15.739 "block_size": 512, 00:19:15.739 "num_blocks": 65536, 00:19:15.739 "uuid": "66bf062a-1356-11ef-8e8f-9dd684e56d79", 00:19:15.739 "assigned_rate_limits": { 00:19:15.739 "rw_ios_per_sec": 0, 00:19:15.739 "rw_mbytes_per_sec": 0, 00:19:15.739 "r_mbytes_per_sec": 0, 00:19:15.739 "w_mbytes_per_sec": 0 00:19:15.739 }, 00:19:15.739 "claimed": true, 00:19:15.739 "claim_type": "exclusive_write", 00:19:15.739 "zoned": false, 00:19:15.739 
"supported_io_types": { 00:19:15.739 "read": true, 00:19:15.739 "write": true, 00:19:15.739 "unmap": true, 00:19:15.739 "write_zeroes": true, 00:19:15.739 "flush": true, 00:19:15.739 "reset": true, 00:19:15.739 "compare": false, 00:19:15.739 "compare_and_write": false, 00:19:15.739 "abort": true, 00:19:15.739 "nvme_admin": false, 00:19:15.739 "nvme_io": false 00:19:15.739 }, 00:19:15.739 "memory_domains": [ 00:19:15.739 { 00:19:15.739 "dma_device_id": "system", 00:19:15.739 "dma_device_type": 1 00:19:15.739 }, 00:19:15.739 { 00:19:15.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.739 "dma_device_type": 2 00:19:15.739 } 00:19:15.739 ], 00:19:15.739 "driver_specific": {} 00:19:15.739 } 00:19:15.739 ] 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.739 "name": "Existed_Raid", 00:19:15.739 "uuid": "66445729-1356-11ef-8e8f-9dd684e56d79", 00:19:15.739 "strip_size_kb": 64, 00:19:15.739 "state": "online", 00:19:15.739 "raid_level": "raid0", 00:19:15.739 "superblock": true, 00:19:15.739 "num_base_bdevs": 2, 00:19:15.739 "num_base_bdevs_discovered": 2, 00:19:15.739 "num_base_bdevs_operational": 2, 00:19:15.739 "base_bdevs_list": [ 00:19:15.739 { 00:19:15.739 "name": "BaseBdev1", 00:19:15.739 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:15.739 "is_configured": true, 00:19:15.739 "data_offset": 2048, 00:19:15.739 "data_size": 63488 00:19:15.739 }, 00:19:15.739 { 00:19:15.739 "name": "BaseBdev2", 00:19:15.739 "uuid": "66bf062a-1356-11ef-8e8f-9dd684e56d79", 00:19:15.739 "is_configured": true, 00:19:15.739 "data_offset": 2048, 00:19:15.739 "data_size": 63488 00:19:15.739 } 00:19:15.739 ] 00:19:15.739 }' 00:19:15.739 07:32:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.739 07:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:16.307 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:16.566 [2024-05-16 07:32:09.936609] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:16.566 "name": "Existed_Raid", 00:19:16.566 "aliases": [ 00:19:16.566 "66445729-1356-11ef-8e8f-9dd684e56d79" 00:19:16.566 ], 00:19:16.566 "product_name": "Raid Volume", 00:19:16.566 "block_size": 512, 00:19:16.566 "num_blocks": 126976, 00:19:16.566 "uuid": "66445729-1356-11ef-8e8f-9dd684e56d79", 00:19:16.566 "assigned_rate_limits": { 00:19:16.566 "rw_ios_per_sec": 0, 00:19:16.566 "rw_mbytes_per_sec": 0, 00:19:16.566 "r_mbytes_per_sec": 0, 00:19:16.566 "w_mbytes_per_sec": 0 00:19:16.566 }, 00:19:16.566 "claimed": false, 00:19:16.566 "zoned": false, 00:19:16.566 "supported_io_types": { 00:19:16.566 "read": true, 00:19:16.566 "write": true, 00:19:16.566 "unmap": true, 00:19:16.566 "write_zeroes": true, 00:19:16.566 "flush": true, 00:19:16.566 "reset": true, 00:19:16.566 "compare": false, 00:19:16.566 "compare_and_write": false, 00:19:16.566 "abort": false, 00:19:16.566 "nvme_admin": false, 00:19:16.566 "nvme_io": false 00:19:16.566 }, 00:19:16.566 "memory_domains": [ 00:19:16.566 { 00:19:16.566 "dma_device_id": "system", 00:19:16.566 "dma_device_type": 1 00:19:16.566 }, 00:19:16.566 { 00:19:16.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.566 "dma_device_type": 2 00:19:16.566 }, 00:19:16.566 { 00:19:16.566 "dma_device_id": "system", 00:19:16.566 "dma_device_type": 1 00:19:16.566 }, 00:19:16.566 { 00:19:16.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.566 "dma_device_type": 2 00:19:16.566 } 00:19:16.566 ], 00:19:16.566 "driver_specific": { 00:19:16.566 "raid": { 00:19:16.566 "uuid": "66445729-1356-11ef-8e8f-9dd684e56d79", 00:19:16.566 "strip_size_kb": 64, 00:19:16.566 "state": "online", 00:19:16.566 "raid_level": "raid0", 00:19:16.566 "superblock": true, 00:19:16.566 "num_base_bdevs": 2, 00:19:16.566 "num_base_bdevs_discovered": 2, 00:19:16.566 "num_base_bdevs_operational": 2, 00:19:16.566 "base_bdevs_list": [ 00:19:16.566 { 00:19:16.566 "name": "BaseBdev1", 00:19:16.566 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:16.566 "is_configured": true, 00:19:16.566 "data_offset": 2048, 00:19:16.566 "data_size": 63488 00:19:16.566 }, 00:19:16.566 { 00:19:16.566 "name": "BaseBdev2", 00:19:16.566 
"uuid": "66bf062a-1356-11ef-8e8f-9dd684e56d79", 00:19:16.566 "is_configured": true, 00:19:16.566 "data_offset": 2048, 00:19:16.566 "data_size": 63488 00:19:16.566 } 00:19:16.566 ] 00:19:16.566 } 00:19:16.566 } 00:19:16.566 }' 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:16.566 BaseBdev2' 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:16.566 07:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:16.824 "name": "BaseBdev1", 00:19:16.824 "aliases": [ 00:19:16.824 "6564f1a4-1356-11ef-8e8f-9dd684e56d79" 00:19:16.824 ], 00:19:16.824 "product_name": "Malloc disk", 00:19:16.824 "block_size": 512, 00:19:16.824 "num_blocks": 65536, 00:19:16.824 "uuid": "6564f1a4-1356-11ef-8e8f-9dd684e56d79", 00:19:16.824 "assigned_rate_limits": { 00:19:16.824 "rw_ios_per_sec": 0, 00:19:16.824 "rw_mbytes_per_sec": 0, 00:19:16.824 "r_mbytes_per_sec": 0, 00:19:16.824 "w_mbytes_per_sec": 0 00:19:16.824 }, 00:19:16.824 "claimed": true, 00:19:16.824 "claim_type": "exclusive_write", 00:19:16.824 "zoned": false, 00:19:16.824 "supported_io_types": { 00:19:16.824 "read": true, 00:19:16.824 "write": true, 00:19:16.824 "unmap": true, 00:19:16.824 "write_zeroes": true, 00:19:16.824 "flush": true, 00:19:16.824 "reset": true, 00:19:16.824 "compare": false, 00:19:16.824 "compare_and_write": false, 00:19:16.824 "abort": true, 00:19:16.824 "nvme_admin": false, 00:19:16.824 "nvme_io": false 00:19:16.824 }, 00:19:16.824 "memory_domains": [ 00:19:16.824 { 00:19:16.824 "dma_device_id": "system", 00:19:16.824 "dma_device_type": 1 00:19:16.824 }, 00:19:16.824 { 00:19:16.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.824 "dma_device_type": 2 00:19:16.824 } 00:19:16.824 ], 00:19:16.824 "driver_specific": {} 00:19:16.824 }' 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:16.824 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.825 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:16.825 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:16.825 
07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:16.825 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:16.825 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:16.825 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:17.083 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:17.083 "name": "BaseBdev2", 00:19:17.083 "aliases": [ 00:19:17.083 "66bf062a-1356-11ef-8e8f-9dd684e56d79" 00:19:17.083 ], 00:19:17.083 "product_name": "Malloc disk", 00:19:17.083 "block_size": 512, 00:19:17.083 "num_blocks": 65536, 00:19:17.083 "uuid": "66bf062a-1356-11ef-8e8f-9dd684e56d79", 00:19:17.083 "assigned_rate_limits": { 00:19:17.083 "rw_ios_per_sec": 0, 00:19:17.083 "rw_mbytes_per_sec": 0, 00:19:17.083 "r_mbytes_per_sec": 0, 00:19:17.083 "w_mbytes_per_sec": 0 00:19:17.083 }, 00:19:17.083 "claimed": true, 00:19:17.083 "claim_type": "exclusive_write", 00:19:17.083 "zoned": false, 00:19:17.083 "supported_io_types": { 00:19:17.083 "read": true, 00:19:17.083 "write": true, 00:19:17.083 "unmap": true, 00:19:17.083 "write_zeroes": true, 00:19:17.083 "flush": true, 00:19:17.083 "reset": true, 00:19:17.084 "compare": false, 00:19:17.084 "compare_and_write": false, 00:19:17.084 "abort": true, 00:19:17.084 "nvme_admin": false, 00:19:17.084 "nvme_io": false 00:19:17.084 }, 00:19:17.084 "memory_domains": [ 00:19:17.084 { 00:19:17.084 "dma_device_id": "system", 00:19:17.084 "dma_device_type": 1 00:19:17.084 }, 00:19:17.084 { 00:19:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.084 "dma_device_type": 2 00:19:17.084 } 00:19:17.084 ], 00:19:17.084 "driver_specific": {} 00:19:17.084 }' 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:17.084 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:17.343 [2024-05-16 07:32:10.832636] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.343 [2024-05-16 07:32:10.832670] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.343 [2024-05-16 07:32:10.832690] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.343 07:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.601 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.601 "name": "Existed_Raid", 00:19:17.601 "uuid": "66445729-1356-11ef-8e8f-9dd684e56d79", 00:19:17.601 "strip_size_kb": 64, 00:19:17.601 "state": "offline", 00:19:17.601 "raid_level": "raid0", 00:19:17.601 "superblock": true, 00:19:17.601 "num_base_bdevs": 2, 00:19:17.601 "num_base_bdevs_discovered": 1, 00:19:17.601 "num_base_bdevs_operational": 1, 00:19:17.601 "base_bdevs_list": [ 00:19:17.601 { 00:19:17.601 "name": null, 00:19:17.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.601 "is_configured": false, 00:19:17.601 "data_offset": 2048, 00:19:17.601 "data_size": 63488 00:19:17.601 }, 00:19:17.601 { 00:19:17.601 "name": "BaseBdev2", 00:19:17.601 "uuid": "66bf062a-1356-11ef-8e8f-9dd684e56d79", 00:19:17.601 "is_configured": true, 00:19:17.601 "data_offset": 2048, 00:19:17.601 "data_size": 63488 00:19:17.601 } 00:19:17.601 ] 00:19:17.601 }' 00:19:17.601 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.601 07:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.168 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:18.168 
07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:18.168 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.168 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:18.427 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:18.427 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.427 07:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.685 [2024-05-16 07:32:12.037965] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.685 [2024-05-16 07:32:12.038002] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d605a00 name Existed_Raid, state offline 00:19:18.685 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:18.685 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:18.685 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.685 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 49752 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 49752 ']' 00:19:18.943 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 49752 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 49752 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:18.944 killing process with pid 49752 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 49752' 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 49752 00:19:18.944 [2024-05-16 07:32:12.320263] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.944 [2024-05-16 07:32:12.320326] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.944 07:32:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@970 -- # wait 49752 00:19:19.201 07:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:19:19.201 00:19:19.201 real 0m9.282s 00:19:19.201 user 0m16.323s 00:19:19.201 sys 0m1.511s 00:19:19.201 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:19.201 07:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.201 ************************************ 00:19:19.201 END TEST raid_state_function_test_sb 00:19:19.201 ************************************ 00:19:19.201 07:32:12 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:19:19.201 07:32:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:19.201 07:32:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:19.201 07:32:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.201 ************************************ 00:19:19.201 START TEST raid_superblock_test 00:19:19.201 ************************************ 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=50026 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 50026 /var/tmp/spdk-raid.sock 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 50026 ']' 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.201 07:32:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.201 [2024-05-16 07:32:12.564692] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:19.201 [2024-05-16 07:32:12.564867] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:19.799 EAL: TSC is not safe to use in SMP mode 00:19:19.799 EAL: TSC is not invariant 00:19:19.799 [2024-05-16 07:32:13.036229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.799 [2024-05-16 07:32:13.124351] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:19.799 [2024-05-16 07:32:13.126580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.799 [2024-05-16 07:32:13.127313] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.799 [2024-05-16 07:32:13.127327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:20.399 malloc1 00:19:20.399 07:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:20.659 [2024-05-16 07:32:14.206830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.659 [2024-05-16 07:32:14.206903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.659 [2024-05-16 07:32:14.207487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x828dd8780 00:19:20.659 [2024-05-16 
07:32:14.207517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.659 [2024-05-16 07:32:14.208395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.659 [2024-05-16 07:32:14.208445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.659 pt1 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:20.984 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:21.242 malloc2 00:19:21.242 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.500 [2024-05-16 07:32:14.810854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.500 [2024-05-16 07:32:14.810938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.500 [2024-05-16 07:32:14.810966] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x828dd8c80 00:19:21.501 [2024-05-16 07:32:14.810987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.501 [2024-05-16 07:32:14.811528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.501 [2024-05-16 07:32:14.811551] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.501 pt2 00:19:21.501 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:21.501 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:21.501 07:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:19:21.863 [2024-05-16 07:32:15.098862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.863 [2024-05-16 07:32:15.099321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.863 [2024-05-16 07:32:15.099370] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x828dd8f00 00:19:21.863 [2024-05-16 07:32:15.099376] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:21.863 [2024-05-16 07:32:15.099418] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x828e3be20 00:19:21.863 [2024-05-16 07:32:15.099478] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x828dd8f00 00:19:21.863 [2024-05-16 07:32:15.099482] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x828dd8f00 00:19:21.863 [2024-05-16 07:32:15.099504] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.863 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.141 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.141 "name": "raid_bdev1", 00:19:22.141 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:22.141 "strip_size_kb": 64, 00:19:22.141 "state": "online", 00:19:22.141 "raid_level": "raid0", 00:19:22.141 "superblock": true, 00:19:22.141 "num_base_bdevs": 2, 00:19:22.141 "num_base_bdevs_discovered": 2, 00:19:22.141 "num_base_bdevs_operational": 2, 00:19:22.141 "base_bdevs_list": [ 00:19:22.141 { 00:19:22.141 "name": "pt1", 00:19:22.141 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:22.141 "is_configured": true, 00:19:22.141 "data_offset": 2048, 00:19:22.141 "data_size": 63488 00:19:22.141 }, 00:19:22.141 { 00:19:22.141 "name": "pt2", 00:19:22.141 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:22.141 "is_configured": true, 00:19:22.141 "data_offset": 2048, 00:19:22.141 "data_size": 63488 00:19:22.141 } 00:19:22.141 ] 00:19:22.141 }' 00:19:22.141 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.141 07:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.400 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:22.400 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:22.400 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:22.400 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:22.400 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:22.401 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:22.401 07:32:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:22.401 07:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:22.659 [2024-05-16 07:32:16.050960] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:22.659 "name": "raid_bdev1", 00:19:22.659 "aliases": [ 00:19:22.659 "6ac1fed6-1356-11ef-8e8f-9dd684e56d79" 00:19:22.659 ], 00:19:22.659 "product_name": "Raid Volume", 00:19:22.659 "block_size": 512, 00:19:22.659 "num_blocks": 126976, 00:19:22.659 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:22.659 "assigned_rate_limits": { 00:19:22.659 "rw_ios_per_sec": 0, 00:19:22.659 "rw_mbytes_per_sec": 0, 00:19:22.659 "r_mbytes_per_sec": 0, 00:19:22.659 "w_mbytes_per_sec": 0 00:19:22.659 }, 00:19:22.659 "claimed": false, 00:19:22.659 "zoned": false, 00:19:22.659 "supported_io_types": { 00:19:22.659 "read": true, 00:19:22.659 "write": true, 00:19:22.659 "unmap": true, 00:19:22.659 "write_zeroes": true, 00:19:22.659 "flush": true, 00:19:22.659 "reset": true, 00:19:22.659 "compare": false, 00:19:22.659 "compare_and_write": false, 00:19:22.659 "abort": false, 00:19:22.659 "nvme_admin": false, 00:19:22.659 "nvme_io": false 00:19:22.659 }, 00:19:22.659 "memory_domains": [ 00:19:22.659 { 00:19:22.659 "dma_device_id": "system", 00:19:22.659 "dma_device_type": 1 00:19:22.659 }, 00:19:22.659 { 00:19:22.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.659 "dma_device_type": 2 00:19:22.659 }, 00:19:22.659 { 00:19:22.659 "dma_device_id": "system", 00:19:22.659 "dma_device_type": 1 00:19:22.659 }, 00:19:22.659 { 00:19:22.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.659 "dma_device_type": 2 00:19:22.659 } 00:19:22.659 ], 00:19:22.659 "driver_specific": { 00:19:22.659 "raid": { 00:19:22.659 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:22.659 "strip_size_kb": 64, 00:19:22.659 "state": "online", 00:19:22.659 "raid_level": "raid0", 00:19:22.659 "superblock": true, 00:19:22.659 "num_base_bdevs": 2, 00:19:22.659 "num_base_bdevs_discovered": 2, 00:19:22.659 "num_base_bdevs_operational": 2, 00:19:22.659 "base_bdevs_list": [ 00:19:22.659 { 00:19:22.659 "name": "pt1", 00:19:22.659 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:22.659 "is_configured": true, 00:19:22.659 "data_offset": 2048, 00:19:22.659 "data_size": 63488 00:19:22.659 }, 00:19:22.659 { 00:19:22.659 "name": "pt2", 00:19:22.659 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:22.659 "is_configured": true, 00:19:22.659 "data_offset": 2048, 00:19:22.659 "data_size": 63488 00:19:22.659 } 00:19:22.659 ] 00:19:22.659 } 00:19:22.659 } 00:19:22.659 }' 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:22.659 pt2' 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:22.659 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:22.918 07:32:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:22.918 "name": "pt1", 00:19:22.918 "aliases": [ 00:19:22.918 "4da5f337-16a1-7859-9896-8b89be47533a" 00:19:22.918 ], 00:19:22.918 "product_name": "passthru", 00:19:22.918 "block_size": 512, 00:19:22.918 "num_blocks": 65536, 00:19:22.918 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:22.918 "assigned_rate_limits": { 00:19:22.918 "rw_ios_per_sec": 0, 00:19:22.918 "rw_mbytes_per_sec": 0, 00:19:22.918 "r_mbytes_per_sec": 0, 00:19:22.918 "w_mbytes_per_sec": 0 00:19:22.918 }, 00:19:22.918 "claimed": true, 00:19:22.918 "claim_type": "exclusive_write", 00:19:22.918 "zoned": false, 00:19:22.918 "supported_io_types": { 00:19:22.918 "read": true, 00:19:22.918 "write": true, 00:19:22.918 "unmap": true, 00:19:22.918 "write_zeroes": true, 00:19:22.918 "flush": true, 00:19:22.918 "reset": true, 00:19:22.918 "compare": false, 00:19:22.918 "compare_and_write": false, 00:19:22.918 "abort": true, 00:19:22.918 "nvme_admin": false, 00:19:22.918 "nvme_io": false 00:19:22.918 }, 00:19:22.918 "memory_domains": [ 00:19:22.918 { 00:19:22.918 "dma_device_id": "system", 00:19:22.918 "dma_device_type": 1 00:19:22.918 }, 00:19:22.918 { 00:19:22.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.918 "dma_device_type": 2 00:19:22.918 } 00:19:22.918 ], 00:19:22.918 "driver_specific": { 00:19:22.918 "passthru": { 00:19:22.918 "name": "pt1", 00:19:22.918 "base_bdev_name": "malloc1" 00:19:22.918 } 00:19:22.918 } 00:19:22.918 }' 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:22.918 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:23.176 "name": "pt2", 00:19:23.176 "aliases": [ 00:19:23.176 "90be89df-68a7-2952-9dcf-39f9deff7ad9" 00:19:23.176 ], 00:19:23.176 "product_name": "passthru", 00:19:23.176 "block_size": 512, 00:19:23.176 "num_blocks": 65536, 00:19:23.176 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:23.176 "assigned_rate_limits": { 00:19:23.176 "rw_ios_per_sec": 0, 
00:19:23.176 "rw_mbytes_per_sec": 0, 00:19:23.176 "r_mbytes_per_sec": 0, 00:19:23.176 "w_mbytes_per_sec": 0 00:19:23.176 }, 00:19:23.176 "claimed": true, 00:19:23.176 "claim_type": "exclusive_write", 00:19:23.176 "zoned": false, 00:19:23.176 "supported_io_types": { 00:19:23.176 "read": true, 00:19:23.176 "write": true, 00:19:23.176 "unmap": true, 00:19:23.176 "write_zeroes": true, 00:19:23.176 "flush": true, 00:19:23.176 "reset": true, 00:19:23.176 "compare": false, 00:19:23.176 "compare_and_write": false, 00:19:23.176 "abort": true, 00:19:23.176 "nvme_admin": false, 00:19:23.176 "nvme_io": false 00:19:23.176 }, 00:19:23.176 "memory_domains": [ 00:19:23.176 { 00:19:23.176 "dma_device_id": "system", 00:19:23.176 "dma_device_type": 1 00:19:23.176 }, 00:19:23.176 { 00:19:23.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.176 "dma_device_type": 2 00:19:23.176 } 00:19:23.176 ], 00:19:23.176 "driver_specific": { 00:19:23.176 "passthru": { 00:19:23.176 "name": "pt2", 00:19:23.176 "base_bdev_name": "malloc2" 00:19:23.176 } 00:19:23.176 } 00:19:23.176 }' 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:23.176 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:23.434 [2024-05-16 07:32:16.942968] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.434 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ac1fed6-1356-11ef-8e8f-9dd684e56d79 00:19:23.434 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6ac1fed6-1356-11ef-8e8f-9dd684e56d79 ']' 00:19:23.434 07:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:23.693 [2024-05-16 07:32:17.170935] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.693 [2024-05-16 07:32:17.170961] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.693 [2024-05-16 07:32:17.170984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.693 [2024-05-16 07:32:17.170995] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.693 [2024-05-16 07:32:17.170999] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828dd8f00 name raid_bdev1, state offline 00:19:23.693 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.693 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:23.951 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:23.951 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:23.951 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:23.951 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:24.210 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.210 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:24.468 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:24.468 07:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:24.728 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:19:24.986 [2024-05-16 07:32:18.450973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:24.986 [2024-05-16 07:32:18.451420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:24.986 [2024-05-16 07:32:18.451442] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:24.986 [2024-05-16 07:32:18.451478] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:24.986 [2024-05-16 07:32:18.451487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.986 [2024-05-16 07:32:18.451492] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828dd8c80 name raid_bdev1, state configuring 00:19:24.986 request: 00:19:24.986 { 00:19:24.986 "name": "raid_bdev1", 00:19:24.986 "raid_level": "raid0", 00:19:24.986 "base_bdevs": [ 00:19:24.986 "malloc1", 00:19:24.986 "malloc2" 00:19:24.986 ], 00:19:24.986 "superblock": false, 00:19:24.986 "strip_size_kb": 64, 00:19:24.986 "method": "bdev_raid_create", 00:19:24.986 "req_id": 1 00:19:24.986 } 00:19:24.986 Got JSON-RPC error response 00:19:24.986 response: 00:19:24.986 { 00:19:24.986 "code": -17, 00:19:24.986 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:24.986 } 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.986 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:25.243 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:25.243 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:25.243 07:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:25.501 [2024-05-16 07:32:19.059001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:25.501 [2024-05-16 07:32:19.059070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.501 [2024-05-16 07:32:19.059096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x828dd8780 00:19:25.501 [2024-05-16 07:32:19.059103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.501 [2024-05-16 07:32:19.059592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.501 [2024-05-16 07:32:19.059613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:25.501 [2024-05-16 07:32:19.059651] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:25.501 [2024-05-16 07:32:19.059661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:19:25.501 pt1 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.759 "name": "raid_bdev1", 00:19:25.759 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:25.759 "strip_size_kb": 64, 00:19:25.759 "state": "configuring", 00:19:25.759 "raid_level": "raid0", 00:19:25.759 "superblock": true, 00:19:25.759 "num_base_bdevs": 2, 00:19:25.759 "num_base_bdevs_discovered": 1, 00:19:25.759 "num_base_bdevs_operational": 2, 00:19:25.759 "base_bdevs_list": [ 00:19:25.759 { 00:19:25.759 "name": "pt1", 00:19:25.759 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:25.759 "is_configured": true, 00:19:25.759 "data_offset": 2048, 00:19:25.759 "data_size": 63488 00:19:25.759 }, 00:19:25.759 { 00:19:25.759 "name": null, 00:19:25.759 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:25.759 "is_configured": false, 00:19:25.759 "data_offset": 2048, 00:19:25.759 "data_size": 63488 00:19:25.759 } 00:19:25.759 ] 00:19:25.759 }' 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.759 07:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.326 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:26.326 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:26.326 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.326 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:26.585 [2024-05-16 07:32:19.899028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.585 [2024-05-16 07:32:19.899087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.585 [2024-05-16 07:32:19.899113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x828dd8f00 00:19:26.585 
[2024-05-16 07:32:19.899120] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.585 [2024-05-16 07:32:19.899213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.585 [2024-05-16 07:32:19.899221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.585 [2024-05-16 07:32:19.899241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:26.585 [2024-05-16 07:32:19.899248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:26.585 [2024-05-16 07:32:19.899269] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x828dd9180 00:19:26.585 [2024-05-16 07:32:19.899272] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:26.585 [2024-05-16 07:32:19.899289] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x828e3be20 00:19:26.585 [2024-05-16 07:32:19.899327] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x828dd9180 00:19:26.585 [2024-05-16 07:32:19.899330] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x828dd9180 00:19:26.585 [2024-05-16 07:32:19.899346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.585 pt2 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.585 07:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.848 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.848 "name": "raid_bdev1", 00:19:26.848 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:26.848 "strip_size_kb": 64, 00:19:26.848 "state": "online", 00:19:26.848 "raid_level": "raid0", 00:19:26.848 "superblock": true, 00:19:26.848 "num_base_bdevs": 2, 00:19:26.848 "num_base_bdevs_discovered": 2, 00:19:26.848 "num_base_bdevs_operational": 2, 00:19:26.848 "base_bdevs_list": [ 00:19:26.848 { 00:19:26.848 "name": "pt1", 00:19:26.848 "uuid": 
"4da5f337-16a1-7859-9896-8b89be47533a", 00:19:26.848 "is_configured": true, 00:19:26.848 "data_offset": 2048, 00:19:26.848 "data_size": 63488 00:19:26.848 }, 00:19:26.848 { 00:19:26.848 "name": "pt2", 00:19:26.848 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:26.848 "is_configured": true, 00:19:26.848 "data_offset": 2048, 00:19:26.848 "data_size": 63488 00:19:26.848 } 00:19:26.848 ] 00:19:26.848 }' 00:19:26.848 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.848 07:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:27.107 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:27.365 [2024-05-16 07:32:20.763096] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:27.365 "name": "raid_bdev1", 00:19:27.365 "aliases": [ 00:19:27.365 "6ac1fed6-1356-11ef-8e8f-9dd684e56d79" 00:19:27.365 ], 00:19:27.365 "product_name": "Raid Volume", 00:19:27.365 "block_size": 512, 00:19:27.365 "num_blocks": 126976, 00:19:27.365 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:27.365 "assigned_rate_limits": { 00:19:27.365 "rw_ios_per_sec": 0, 00:19:27.365 "rw_mbytes_per_sec": 0, 00:19:27.365 "r_mbytes_per_sec": 0, 00:19:27.365 "w_mbytes_per_sec": 0 00:19:27.365 }, 00:19:27.365 "claimed": false, 00:19:27.365 "zoned": false, 00:19:27.365 "supported_io_types": { 00:19:27.365 "read": true, 00:19:27.365 "write": true, 00:19:27.365 "unmap": true, 00:19:27.365 "write_zeroes": true, 00:19:27.365 "flush": true, 00:19:27.365 "reset": true, 00:19:27.365 "compare": false, 00:19:27.365 "compare_and_write": false, 00:19:27.365 "abort": false, 00:19:27.365 "nvme_admin": false, 00:19:27.365 "nvme_io": false 00:19:27.365 }, 00:19:27.365 "memory_domains": [ 00:19:27.365 { 00:19:27.365 "dma_device_id": "system", 00:19:27.365 "dma_device_type": 1 00:19:27.365 }, 00:19:27.365 { 00:19:27.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.365 "dma_device_type": 2 00:19:27.365 }, 00:19:27.365 { 00:19:27.365 "dma_device_id": "system", 00:19:27.365 "dma_device_type": 1 00:19:27.365 }, 00:19:27.365 { 00:19:27.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.365 "dma_device_type": 2 00:19:27.365 } 00:19:27.365 ], 00:19:27.365 "driver_specific": { 00:19:27.365 "raid": { 00:19:27.365 "uuid": "6ac1fed6-1356-11ef-8e8f-9dd684e56d79", 00:19:27.365 "strip_size_kb": 64, 00:19:27.365 "state": "online", 00:19:27.365 "raid_level": "raid0", 00:19:27.365 "superblock": true, 00:19:27.365 "num_base_bdevs": 2, 00:19:27.365 "num_base_bdevs_discovered": 2, 00:19:27.365 
"num_base_bdevs_operational": 2, 00:19:27.365 "base_bdevs_list": [ 00:19:27.365 { 00:19:27.365 "name": "pt1", 00:19:27.365 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:27.365 "is_configured": true, 00:19:27.365 "data_offset": 2048, 00:19:27.365 "data_size": 63488 00:19:27.365 }, 00:19:27.365 { 00:19:27.365 "name": "pt2", 00:19:27.365 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:27.365 "is_configured": true, 00:19:27.365 "data_offset": 2048, 00:19:27.365 "data_size": 63488 00:19:27.365 } 00:19:27.365 ] 00:19:27.365 } 00:19:27.365 } 00:19:27.365 }' 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:27.365 pt2' 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:27.365 07:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:27.623 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:27.623 "name": "pt1", 00:19:27.624 "aliases": [ 00:19:27.624 "4da5f337-16a1-7859-9896-8b89be47533a" 00:19:27.624 ], 00:19:27.624 "product_name": "passthru", 00:19:27.624 "block_size": 512, 00:19:27.624 "num_blocks": 65536, 00:19:27.624 "uuid": "4da5f337-16a1-7859-9896-8b89be47533a", 00:19:27.624 "assigned_rate_limits": { 00:19:27.624 "rw_ios_per_sec": 0, 00:19:27.624 "rw_mbytes_per_sec": 0, 00:19:27.624 "r_mbytes_per_sec": 0, 00:19:27.624 "w_mbytes_per_sec": 0 00:19:27.624 }, 00:19:27.624 "claimed": true, 00:19:27.624 "claim_type": "exclusive_write", 00:19:27.624 "zoned": false, 00:19:27.624 "supported_io_types": { 00:19:27.624 "read": true, 00:19:27.624 "write": true, 00:19:27.624 "unmap": true, 00:19:27.624 "write_zeroes": true, 00:19:27.624 "flush": true, 00:19:27.624 "reset": true, 00:19:27.624 "compare": false, 00:19:27.624 "compare_and_write": false, 00:19:27.624 "abort": true, 00:19:27.624 "nvme_admin": false, 00:19:27.624 "nvme_io": false 00:19:27.624 }, 00:19:27.624 "memory_domains": [ 00:19:27.624 { 00:19:27.624 "dma_device_id": "system", 00:19:27.624 "dma_device_type": 1 00:19:27.624 }, 00:19:27.624 { 00:19:27.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.624 "dma_device_type": 2 00:19:27.624 } 00:19:27.624 ], 00:19:27.624 "driver_specific": { 00:19:27.624 "passthru": { 00:19:27.624 "name": "pt1", 00:19:27.624 "base_bdev_name": "malloc1" 00:19:27.624 } 00:19:27.624 } 00:19:27.624 }' 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.md_interleave 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:27.624 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:27.882 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:27.882 "name": "pt2", 00:19:27.882 "aliases": [ 00:19:27.882 "90be89df-68a7-2952-9dcf-39f9deff7ad9" 00:19:27.882 ], 00:19:27.882 "product_name": "passthru", 00:19:27.882 "block_size": 512, 00:19:27.882 "num_blocks": 65536, 00:19:27.882 "uuid": "90be89df-68a7-2952-9dcf-39f9deff7ad9", 00:19:27.882 "assigned_rate_limits": { 00:19:27.882 "rw_ios_per_sec": 0, 00:19:27.882 "rw_mbytes_per_sec": 0, 00:19:27.882 "r_mbytes_per_sec": 0, 00:19:27.882 "w_mbytes_per_sec": 0 00:19:27.882 }, 00:19:27.882 "claimed": true, 00:19:27.882 "claim_type": "exclusive_write", 00:19:27.882 "zoned": false, 00:19:27.882 "supported_io_types": { 00:19:27.882 "read": true, 00:19:27.882 "write": true, 00:19:27.882 "unmap": true, 00:19:27.882 "write_zeroes": true, 00:19:27.882 "flush": true, 00:19:27.882 "reset": true, 00:19:27.882 "compare": false, 00:19:27.882 "compare_and_write": false, 00:19:27.882 "abort": true, 00:19:27.882 "nvme_admin": false, 00:19:27.882 "nvme_io": false 00:19:27.882 }, 00:19:27.882 "memory_domains": [ 00:19:27.882 { 00:19:27.882 "dma_device_id": "system", 00:19:27.882 "dma_device_type": 1 00:19:27.882 }, 00:19:27.882 { 00:19:27.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.883 "dma_device_type": 2 00:19:27.883 } 00:19:27.883 ], 00:19:27.883 "driver_specific": { 00:19:27.883 "passthru": { 00:19:27.883 "name": "pt2", 00:19:27.883 "base_bdev_name": "malloc2" 00:19:27.883 } 00:19:27.883 } 00:19:27.883 }' 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:27.883 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:28.210 07:32:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:28.210 [2024-05-16 07:32:21.752026] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6ac1fed6-1356-11ef-8e8f-9dd684e56d79 '!=' 6ac1fed6-1356-11ef-8e8f-9dd684e56d79 ']' 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:28.210 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 50026 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 50026 ']' 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 50026 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 50026 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:28.470 killing process with pid 50026 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50026' 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 50026 00:19:28.470 [2024-05-16 07:32:21.785583] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.470 [2024-05-16 07:32:21.785612] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 50026 00:19:28.470 [2024-05-16 07:32:21.785625] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.470 [2024-05-16 07:32:21.785629] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x828dd9180 name raid_bdev1, state offline 00:19:28.470 [2024-05-16 07:32:21.795339] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:28.470 00:19:28.470 real 0m9.408s 00:19:28.470 user 0m16.589s 00:19:28.470 sys 0m1.507s 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:28.470 07:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.470 ************************************ 00:19:28.470 END TEST raid_superblock_test 00:19:28.470 ************************************ 00:19:28.470 07:32:22 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:19:28.470 07:32:22 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test 
raid_state_function_test concat 2 false 00:19:28.470 07:32:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:28.470 07:32:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:28.470 07:32:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.470 ************************************ 00:19:28.470 START TEST raid_state_function_test 00:19:28.470 ************************************ 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=50293 00:19:28.470 Process raid pid: 50293 00:19:28.470 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50293' 00:19:28.470 07:32:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 50293 /var/tmp/spdk-raid.sock 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 50293 ']' 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.471 07:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 [2024-05-16 07:32:22.024682] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:28.471 [2024-05-16 07:32:22.024824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:29.037 EAL: TSC is not safe to use in SMP mode 00:19:29.037 EAL: TSC is not invariant 00:19:29.037 [2024-05-16 07:32:22.490875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.037 [2024-05-16 07:32:22.584767] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:29.037 [2024-05-16 07:32:22.587360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.037 [2024-05-16 07:32:22.588256] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.037 [2024-05-16 07:32:22.588273] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.628 07:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.628 07:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:19:29.628 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:29.887 [2024-05-16 07:32:23.224443] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:29.887 [2024-05-16 07:32:23.224530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:29.887 [2024-05-16 07:32:23.224542] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.887 [2024-05-16 07:32:23.224561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.887 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.145 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.145 "name": "Existed_Raid", 00:19:30.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.145 "strip_size_kb": 64, 00:19:30.145 "state": "configuring", 00:19:30.145 "raid_level": "concat", 00:19:30.145 "superblock": false, 00:19:30.145 "num_base_bdevs": 2, 00:19:30.145 "num_base_bdevs_discovered": 0, 00:19:30.145 "num_base_bdevs_operational": 2, 00:19:30.145 "base_bdevs_list": [ 00:19:30.145 { 00:19:30.145 "name": "BaseBdev1", 00:19:30.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.145 "is_configured": false, 00:19:30.145 "data_offset": 0, 00:19:30.145 "data_size": 0 00:19:30.145 }, 00:19:30.145 { 00:19:30.145 "name": "BaseBdev2", 00:19:30.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.145 "is_configured": false, 00:19:30.145 "data_offset": 0, 00:19:30.145 "data_size": 0 00:19:30.145 } 00:19:30.145 ] 00:19:30.145 }' 00:19:30.145 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.145 07:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.408 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.667 [2024-05-16 07:32:23.976389] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.667 [2024-05-16 07:32:23.976413] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c002500 name Existed_Raid, state configuring 00:19:30.667 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:30.667 [2024-05-16 07:32:24.232399] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.667 [2024-05-16 07:32:24.232442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.667 [2024-05-16 07:32:24.232463] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.667 [2024-05-16 07:32:24.232470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.924 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.181 [2024-05-16 07:32:24.513304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
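The xtrace above keeps repeating one verification pattern: dump every raid bdev over the test RPC socket with bdev_raid_get_bdevs, pick out the bdev under test with jq, and compare the reported state, raid_level and base-bdev counts against the expected values. A condensed, hedged sketch of that pattern follows; check_raid_state is a hypothetical helper written only for illustration (it is not the verify_raid_bdev_state function from bdev_raid.sh), while the rpc.py path, socket and JSON field names are taken from the log itself.

#!/usr/bin/env bash
# Illustrative sketch only: mirrors the checks traced above, not the real test helper.
RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_state() {
    # args: raid name, expected state (configuring/online/offline), expected level, expected operational count
    local name=$1 state=$2 level=$3 operational=$4 info
    # Same RPC call and jq selection that appear in the trace.
    info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r .state <<<"$info") == "$state" ]] &&
    [[ $(jq -r .raid_level <<<"$info") == "$level" ]] &&
    [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$operational" ]]
}

# Usage matching the log: the concat raid reports "configuring" while only
# BaseBdev1 exists, and "online" once BaseBdev2 has been added.
check_raid_state Existed_Raid configuring concat 2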
00:19:31.181 BaseBdev1 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:31.181 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.437 07:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.694 [ 00:19:31.694 { 00:19:31.694 "name": "BaseBdev1", 00:19:31.694 "aliases": [ 00:19:31.694 "705e64f1-1356-11ef-8e8f-9dd684e56d79" 00:19:31.694 ], 00:19:31.694 "product_name": "Malloc disk", 00:19:31.694 "block_size": 512, 00:19:31.694 "num_blocks": 65536, 00:19:31.694 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:31.694 "assigned_rate_limits": { 00:19:31.694 "rw_ios_per_sec": 0, 00:19:31.694 "rw_mbytes_per_sec": 0, 00:19:31.694 "r_mbytes_per_sec": 0, 00:19:31.694 "w_mbytes_per_sec": 0 00:19:31.694 }, 00:19:31.694 "claimed": true, 00:19:31.694 "claim_type": "exclusive_write", 00:19:31.694 "zoned": false, 00:19:31.694 "supported_io_types": { 00:19:31.694 "read": true, 00:19:31.694 "write": true, 00:19:31.694 "unmap": true, 00:19:31.694 "write_zeroes": true, 00:19:31.694 "flush": true, 00:19:31.694 "reset": true, 00:19:31.694 "compare": false, 00:19:31.694 "compare_and_write": false, 00:19:31.694 "abort": true, 00:19:31.694 "nvme_admin": false, 00:19:31.694 "nvme_io": false 00:19:31.694 }, 00:19:31.694 "memory_domains": [ 00:19:31.694 { 00:19:31.694 "dma_device_id": "system", 00:19:31.694 "dma_device_type": 1 00:19:31.694 }, 00:19:31.694 { 00:19:31.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.694 "dma_device_type": 2 00:19:31.694 } 00:19:31.694 ], 00:19:31.694 "driver_specific": {} 00:19:31.694 } 00:19:31.694 ] 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.694 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.951 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.951 "name": "Existed_Raid", 00:19:31.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.951 "strip_size_kb": 64, 00:19:31.951 "state": "configuring", 00:19:31.951 "raid_level": "concat", 00:19:31.951 "superblock": false, 00:19:31.951 "num_base_bdevs": 2, 00:19:31.951 "num_base_bdevs_discovered": 1, 00:19:31.951 "num_base_bdevs_operational": 2, 00:19:31.951 "base_bdevs_list": [ 00:19:31.951 { 00:19:31.951 "name": "BaseBdev1", 00:19:31.951 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:31.951 "is_configured": true, 00:19:31.951 "data_offset": 0, 00:19:31.951 "data_size": 65536 00:19:31.951 }, 00:19:31.951 { 00:19:31.951 "name": "BaseBdev2", 00:19:31.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.951 "is_configured": false, 00:19:31.951 "data_offset": 0, 00:19:31.951 "data_size": 0 00:19:31.951 } 00:19:31.951 ] 00:19:31.951 }' 00:19:31.951 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.951 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.209 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:32.466 [2024-05-16 07:32:25.868454] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.466 [2024-05-16 07:32:25.868485] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c002500 name Existed_Raid, state configuring 00:19:32.466 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:32.723 [2024-05-16 07:32:26.144466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.723 [2024-05-16 07:32:26.145134] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.723 [2024-05-16 07:32:26.145174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.723 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.982 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.982 "name": "Existed_Raid", 00:19:32.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.982 "strip_size_kb": 64, 00:19:32.982 "state": "configuring", 00:19:32.982 "raid_level": "concat", 00:19:32.982 "superblock": false, 00:19:32.982 "num_base_bdevs": 2, 00:19:32.982 "num_base_bdevs_discovered": 1, 00:19:32.982 "num_base_bdevs_operational": 2, 00:19:32.982 "base_bdevs_list": [ 00:19:32.982 { 00:19:32.982 "name": "BaseBdev1", 00:19:32.982 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:32.982 "is_configured": true, 00:19:32.982 "data_offset": 0, 00:19:32.982 "data_size": 65536 00:19:32.982 }, 00:19:32.982 { 00:19:32.982 "name": "BaseBdev2", 00:19:32.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.982 "is_configured": false, 00:19:32.982 "data_offset": 0, 00:19:32.982 "data_size": 0 00:19:32.982 } 00:19:32.982 ] 00:19:32.982 }' 00:19:32.982 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.982 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.240 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.499 [2024-05-16 07:32:26.980637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.499 [2024-05-16 07:32:26.980665] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c002a00 00:19:33.499 [2024-05-16 07:32:26.980669] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:33.499 [2024-05-16 07:32:26.980689] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c065ec0 00:19:33.499 [2024-05-16 07:32:26.980782] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c002a00 00:19:33.499 [2024-05-16 07:32:26.980786] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c002a00 00:19:33.499 [2024-05-16 07:32:26.980814] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.499 BaseBdev2 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:33.499 07:32:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:33.499 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.756 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.014 [ 00:19:34.014 { 00:19:34.014 "name": "BaseBdev2", 00:19:34.014 "aliases": [ 00:19:34.014 "71d6feaf-1356-11ef-8e8f-9dd684e56d79" 00:19:34.014 ], 00:19:34.014 "product_name": "Malloc disk", 00:19:34.014 "block_size": 512, 00:19:34.014 "num_blocks": 65536, 00:19:34.014 "uuid": "71d6feaf-1356-11ef-8e8f-9dd684e56d79", 00:19:34.014 "assigned_rate_limits": { 00:19:34.014 "rw_ios_per_sec": 0, 00:19:34.014 "rw_mbytes_per_sec": 0, 00:19:34.014 "r_mbytes_per_sec": 0, 00:19:34.014 "w_mbytes_per_sec": 0 00:19:34.014 }, 00:19:34.014 "claimed": true, 00:19:34.015 "claim_type": "exclusive_write", 00:19:34.015 "zoned": false, 00:19:34.015 "supported_io_types": { 00:19:34.015 "read": true, 00:19:34.015 "write": true, 00:19:34.015 "unmap": true, 00:19:34.015 "write_zeroes": true, 00:19:34.015 "flush": true, 00:19:34.015 "reset": true, 00:19:34.015 "compare": false, 00:19:34.015 "compare_and_write": false, 00:19:34.015 "abort": true, 00:19:34.015 "nvme_admin": false, 00:19:34.015 "nvme_io": false 00:19:34.015 }, 00:19:34.015 "memory_domains": [ 00:19:34.015 { 00:19:34.015 "dma_device_id": "system", 00:19:34.015 "dma_device_type": 1 00:19:34.015 }, 00:19:34.015 { 00:19:34.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.015 "dma_device_type": 2 00:19:34.015 } 00:19:34.015 ], 00:19:34.015 "driver_specific": {} 00:19:34.015 } 00:19:34.015 ] 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.015 07:32:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.015 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.272 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.272 "name": "Existed_Raid", 00:19:34.272 "uuid": "71d7049d-1356-11ef-8e8f-9dd684e56d79", 00:19:34.272 "strip_size_kb": 64, 00:19:34.272 "state": "online", 00:19:34.272 "raid_level": "concat", 00:19:34.272 "superblock": false, 00:19:34.272 "num_base_bdevs": 2, 00:19:34.272 "num_base_bdevs_discovered": 2, 00:19:34.272 "num_base_bdevs_operational": 2, 00:19:34.272 "base_bdevs_list": [ 00:19:34.272 { 00:19:34.272 "name": "BaseBdev1", 00:19:34.272 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:34.272 "is_configured": true, 00:19:34.272 "data_offset": 0, 00:19:34.272 "data_size": 65536 00:19:34.272 }, 00:19:34.272 { 00:19:34.272 "name": "BaseBdev2", 00:19:34.272 "uuid": "71d6feaf-1356-11ef-8e8f-9dd684e56d79", 00:19:34.272 "is_configured": true, 00:19:34.272 "data_offset": 0, 00:19:34.272 "data_size": 65536 00:19:34.272 } 00:19:34.272 ] 00:19:34.272 }' 00:19:34.272 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.272 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:34.529 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:34.786 [2024-05-16 07:32:28.236549] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.786 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:34.786 "name": "Existed_Raid", 00:19:34.786 "aliases": [ 00:19:34.786 "71d7049d-1356-11ef-8e8f-9dd684e56d79" 00:19:34.786 ], 00:19:34.786 "product_name": "Raid Volume", 00:19:34.786 "block_size": 512, 00:19:34.786 "num_blocks": 131072, 00:19:34.786 "uuid": "71d7049d-1356-11ef-8e8f-9dd684e56d79", 00:19:34.786 "assigned_rate_limits": { 00:19:34.786 "rw_ios_per_sec": 0, 00:19:34.786 "rw_mbytes_per_sec": 0, 00:19:34.786 "r_mbytes_per_sec": 0, 00:19:34.786 "w_mbytes_per_sec": 0 00:19:34.786 }, 00:19:34.786 "claimed": false, 00:19:34.786 "zoned": false, 00:19:34.786 "supported_io_types": { 00:19:34.786 "read": true, 00:19:34.786 "write": true, 00:19:34.786 "unmap": true, 00:19:34.786 "write_zeroes": true, 00:19:34.786 "flush": true, 00:19:34.786 "reset": true, 00:19:34.786 "compare": false, 00:19:34.786 "compare_and_write": false, 00:19:34.786 "abort": false, 00:19:34.786 
"nvme_admin": false, 00:19:34.786 "nvme_io": false 00:19:34.786 }, 00:19:34.786 "memory_domains": [ 00:19:34.786 { 00:19:34.787 "dma_device_id": "system", 00:19:34.787 "dma_device_type": 1 00:19:34.787 }, 00:19:34.787 { 00:19:34.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.787 "dma_device_type": 2 00:19:34.787 }, 00:19:34.787 { 00:19:34.787 "dma_device_id": "system", 00:19:34.787 "dma_device_type": 1 00:19:34.787 }, 00:19:34.787 { 00:19:34.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.787 "dma_device_type": 2 00:19:34.787 } 00:19:34.787 ], 00:19:34.787 "driver_specific": { 00:19:34.787 "raid": { 00:19:34.787 "uuid": "71d7049d-1356-11ef-8e8f-9dd684e56d79", 00:19:34.787 "strip_size_kb": 64, 00:19:34.787 "state": "online", 00:19:34.787 "raid_level": "concat", 00:19:34.787 "superblock": false, 00:19:34.787 "num_base_bdevs": 2, 00:19:34.787 "num_base_bdevs_discovered": 2, 00:19:34.787 "num_base_bdevs_operational": 2, 00:19:34.787 "base_bdevs_list": [ 00:19:34.787 { 00:19:34.787 "name": "BaseBdev1", 00:19:34.787 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:34.787 "is_configured": true, 00:19:34.787 "data_offset": 0, 00:19:34.787 "data_size": 65536 00:19:34.787 }, 00:19:34.787 { 00:19:34.787 "name": "BaseBdev2", 00:19:34.787 "uuid": "71d6feaf-1356-11ef-8e8f-9dd684e56d79", 00:19:34.787 "is_configured": true, 00:19:34.787 "data_offset": 0, 00:19:34.787 "data_size": 65536 00:19:34.787 } 00:19:34.787 ] 00:19:34.787 } 00:19:34.787 } 00:19:34.787 }' 00:19:34.787 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.787 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:34.787 BaseBdev2' 00:19:34.787 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:34.787 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:34.787 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:35.045 "name": "BaseBdev1", 00:19:35.045 "aliases": [ 00:19:35.045 "705e64f1-1356-11ef-8e8f-9dd684e56d79" 00:19:35.045 ], 00:19:35.045 "product_name": "Malloc disk", 00:19:35.045 "block_size": 512, 00:19:35.045 "num_blocks": 65536, 00:19:35.045 "uuid": "705e64f1-1356-11ef-8e8f-9dd684e56d79", 00:19:35.045 "assigned_rate_limits": { 00:19:35.045 "rw_ios_per_sec": 0, 00:19:35.045 "rw_mbytes_per_sec": 0, 00:19:35.045 "r_mbytes_per_sec": 0, 00:19:35.045 "w_mbytes_per_sec": 0 00:19:35.045 }, 00:19:35.045 "claimed": true, 00:19:35.045 "claim_type": "exclusive_write", 00:19:35.045 "zoned": false, 00:19:35.045 "supported_io_types": { 00:19:35.045 "read": true, 00:19:35.045 "write": true, 00:19:35.045 "unmap": true, 00:19:35.045 "write_zeroes": true, 00:19:35.045 "flush": true, 00:19:35.045 "reset": true, 00:19:35.045 "compare": false, 00:19:35.045 "compare_and_write": false, 00:19:35.045 "abort": true, 00:19:35.045 "nvme_admin": false, 00:19:35.045 "nvme_io": false 00:19:35.045 }, 00:19:35.045 "memory_domains": [ 00:19:35.045 { 00:19:35.045 "dma_device_id": "system", 00:19:35.045 "dma_device_type": 1 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.045 "dma_device_type": 2 00:19:35.045 
} 00:19:35.045 ], 00:19:35.045 "driver_specific": {} 00:19:35.045 }' 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:35.045 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:35.305 "name": "BaseBdev2", 00:19:35.305 "aliases": [ 00:19:35.305 "71d6feaf-1356-11ef-8e8f-9dd684e56d79" 00:19:35.305 ], 00:19:35.305 "product_name": "Malloc disk", 00:19:35.305 "block_size": 512, 00:19:35.305 "num_blocks": 65536, 00:19:35.305 "uuid": "71d6feaf-1356-11ef-8e8f-9dd684e56d79", 00:19:35.305 "assigned_rate_limits": { 00:19:35.305 "rw_ios_per_sec": 0, 00:19:35.305 "rw_mbytes_per_sec": 0, 00:19:35.305 "r_mbytes_per_sec": 0, 00:19:35.305 "w_mbytes_per_sec": 0 00:19:35.305 }, 00:19:35.305 "claimed": true, 00:19:35.305 "claim_type": "exclusive_write", 00:19:35.305 "zoned": false, 00:19:35.305 "supported_io_types": { 00:19:35.305 "read": true, 00:19:35.305 "write": true, 00:19:35.305 "unmap": true, 00:19:35.305 "write_zeroes": true, 00:19:35.305 "flush": true, 00:19:35.305 "reset": true, 00:19:35.305 "compare": false, 00:19:35.305 "compare_and_write": false, 00:19:35.305 "abort": true, 00:19:35.305 "nvme_admin": false, 00:19:35.305 "nvme_io": false 00:19:35.305 }, 00:19:35.305 "memory_domains": [ 00:19:35.305 { 00:19:35.305 "dma_device_id": "system", 00:19:35.305 "dma_device_type": 1 00:19:35.305 }, 00:19:35.305 { 00:19:35.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.305 "dma_device_type": 2 00:19:35.305 } 00:19:35.305 ], 00:19:35.305 "driver_specific": {} 00:19:35.305 }' 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:35.305 07:32:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:35.305 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:35.562 [2024-05-16 07:32:29.056545] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.562 [2024-05-16 07:32:29.056566] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.562 [2024-05-16 07:32:29.056579] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.562 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.820 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.820 "name": "Existed_Raid", 00:19:35.820 "uuid": "71d7049d-1356-11ef-8e8f-9dd684e56d79", 
00:19:35.820 "strip_size_kb": 64, 00:19:35.820 "state": "offline", 00:19:35.820 "raid_level": "concat", 00:19:35.820 "superblock": false, 00:19:35.820 "num_base_bdevs": 2, 00:19:35.820 "num_base_bdevs_discovered": 1, 00:19:35.820 "num_base_bdevs_operational": 1, 00:19:35.820 "base_bdevs_list": [ 00:19:35.820 { 00:19:35.820 "name": null, 00:19:35.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.820 "is_configured": false, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 }, 00:19:35.820 { 00:19:35.820 "name": "BaseBdev2", 00:19:35.820 "uuid": "71d6feaf-1356-11ef-8e8f-9dd684e56d79", 00:19:35.820 "is_configured": true, 00:19:35.820 "data_offset": 0, 00:19:35.820 "data_size": 65536 00:19:35.820 } 00:19:35.820 ] 00:19:35.820 }' 00:19:35.820 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.820 07:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:36.389 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:36.647 [2024-05-16 07:32:30.157353] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:36.647 [2024-05-16 07:32:30.157381] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c002a00 name Existed_Raid, state offline 00:19:36.647 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:36.647 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:36.647 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.647 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 50293 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 50293 ']' 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 50293 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:37.212 07:32:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 50293 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:37.212 killing process with pid 50293 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50293' 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 50293 00:19:37.212 [2024-05-16 07:32:30.500993] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 50293 00:19:37.212 [2024-05-16 07:32:30.501020] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.212 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:19:37.212 00:19:37.212 real 0m8.660s 00:19:37.212 user 0m15.166s 00:19:37.213 sys 0m1.429s 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.213 ************************************ 00:19:37.213 END TEST raid_state_function_test 00:19:37.213 ************************************ 00:19:37.213 07:32:30 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:19:37.213 07:32:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:37.213 07:32:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:37.213 07:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.213 ************************************ 00:19:37.213 START TEST raid_state_function_test_sb 00:19:37.213 ************************************ 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= 
num_base_bdevs )) 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=50564 00:19:37.213 Process raid pid: 50564 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 50564' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 50564 /var/tmp/spdk-raid.sock 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 50564 ']' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:37.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:37.213 07:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.213 [2024-05-16 07:32:30.724553] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:37.213 [2024-05-16 07:32:30.724762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:37.788 EAL: TSC is not safe to use in SMP mode 00:19:37.788 EAL: TSC is not invariant 00:19:37.788 [2024-05-16 07:32:31.173287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.788 [2024-05-16 07:32:31.259378] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
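The superblock variant that starts here drives the same RPC sequence visible in the trace that follows: create malloc base bdevs, assemble them into a concat raid with the -s flag so a superblock is written, then read the result back. A minimal manual reproduction against an already-running bdev_svc instance might look like the lines below; the commands and parameters are copied from the log, but running them by hand in this order is an assumption for illustration rather than part of the suite.

RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 32 MiB malloc bdevs with 512-byte blocks (65536 blocks), as created by the test.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# Concat raid, 64 KiB strip size, with an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# With both base bdevs present at create time the raid should report "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'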
00:19:37.788 [2024-05-16 07:32:31.261511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.788 [2024-05-16 07:32:31.262247] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.788 [2024-05-16 07:32:31.262259] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.355 07:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:38.355 07:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:38.355 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:38.614 [2024-05-16 07:32:31.972662] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.614 [2024-05-16 07:32:31.972708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.614 [2024-05-16 07:32:31.972712] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.614 [2024-05-16 07:32:31.972720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.614 07:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.872 07:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.872 "name": "Existed_Raid", 00:19:38.872 "uuid": "74d0bbdc-1356-11ef-8e8f-9dd684e56d79", 00:19:38.872 "strip_size_kb": 64, 00:19:38.872 "state": "configuring", 00:19:38.872 "raid_level": "concat", 00:19:38.872 "superblock": true, 00:19:38.872 "num_base_bdevs": 2, 00:19:38.872 "num_base_bdevs_discovered": 0, 00:19:38.872 "num_base_bdevs_operational": 2, 00:19:38.872 "base_bdevs_list": [ 00:19:38.872 { 00:19:38.872 "name": "BaseBdev1", 00:19:38.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.872 "is_configured": false, 00:19:38.872 "data_offset": 0, 00:19:38.872 "data_size": 0 
00:19:38.872 }, 00:19:38.872 { 00:19:38.872 "name": "BaseBdev2", 00:19:38.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.872 "is_configured": false, 00:19:38.872 "data_offset": 0, 00:19:38.873 "data_size": 0 00:19:38.873 } 00:19:38.873 ] 00:19:38.873 }' 00:19:38.873 07:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.873 07:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 07:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:39.389 [2024-05-16 07:32:32.792665] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.389 [2024-05-16 07:32:32.792692] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd7c500 name Existed_Raid, state configuring 00:19:39.389 07:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:39.648 [2024-05-16 07:32:33.060667] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:39.648 [2024-05-16 07:32:33.060713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:39.648 [2024-05-16 07:32:33.060718] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.648 [2024-05-16 07:32:33.060725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.648 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.906 [2024-05-16 07:32:33.309607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.906 BaseBdev1 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:39.906 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.216 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:40.473 [ 00:19:40.473 { 00:19:40.473 "name": "BaseBdev1", 00:19:40.473 "aliases": [ 00:19:40.473 "759c98b0-1356-11ef-8e8f-9dd684e56d79" 00:19:40.473 ], 00:19:40.473 "product_name": "Malloc disk", 00:19:40.473 "block_size": 512, 00:19:40.473 "num_blocks": 65536, 00:19:40.473 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:40.473 "assigned_rate_limits": { 00:19:40.473 "rw_ios_per_sec": 0, 
00:19:40.473 "rw_mbytes_per_sec": 0, 00:19:40.473 "r_mbytes_per_sec": 0, 00:19:40.473 "w_mbytes_per_sec": 0 00:19:40.473 }, 00:19:40.473 "claimed": true, 00:19:40.473 "claim_type": "exclusive_write", 00:19:40.473 "zoned": false, 00:19:40.473 "supported_io_types": { 00:19:40.473 "read": true, 00:19:40.473 "write": true, 00:19:40.473 "unmap": true, 00:19:40.473 "write_zeroes": true, 00:19:40.473 "flush": true, 00:19:40.473 "reset": true, 00:19:40.473 "compare": false, 00:19:40.473 "compare_and_write": false, 00:19:40.473 "abort": true, 00:19:40.473 "nvme_admin": false, 00:19:40.473 "nvme_io": false 00:19:40.473 }, 00:19:40.473 "memory_domains": [ 00:19:40.473 { 00:19:40.473 "dma_device_id": "system", 00:19:40.473 "dma_device_type": 1 00:19:40.473 }, 00:19:40.473 { 00:19:40.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.473 "dma_device_type": 2 00:19:40.473 } 00:19:40.473 ], 00:19:40.473 "driver_specific": {} 00:19:40.473 } 00:19:40.473 ] 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.473 07:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.731 07:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.731 "name": "Existed_Raid", 00:19:40.731 "uuid": "7576c01f-1356-11ef-8e8f-9dd684e56d79", 00:19:40.731 "strip_size_kb": 64, 00:19:40.731 "state": "configuring", 00:19:40.731 "raid_level": "concat", 00:19:40.731 "superblock": true, 00:19:40.731 "num_base_bdevs": 2, 00:19:40.731 "num_base_bdevs_discovered": 1, 00:19:40.731 "num_base_bdevs_operational": 2, 00:19:40.731 "base_bdevs_list": [ 00:19:40.731 { 00:19:40.731 "name": "BaseBdev1", 00:19:40.731 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:40.731 "is_configured": true, 00:19:40.731 "data_offset": 2048, 00:19:40.731 "data_size": 63488 00:19:40.731 }, 00:19:40.731 { 00:19:40.731 "name": "BaseBdev2", 00:19:40.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.731 "is_configured": false, 00:19:40.731 "data_offset": 0, 00:19:40.731 "data_size": 0 00:19:40.731 } 00:19:40.731 ] 
00:19:40.731 }' 00:19:40.731 07:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.731 07:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.298 07:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:41.298 [2024-05-16 07:32:34.804702] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.298 [2024-05-16 07:32:34.804732] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd7c500 name Existed_Raid, state configuring 00:19:41.298 07:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:41.557 [2024-05-16 07:32:35.084711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.557 [2024-05-16 07:32:35.085340] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:41.557 [2024-05-16 07:32:35.085375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.557 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.816 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.816 "name": "Existed_Raid", 00:19:41.816 "uuid": "76ab9850-1356-11ef-8e8f-9dd684e56d79", 00:19:41.816 "strip_size_kb": 64, 00:19:41.816 "state": "configuring", 00:19:41.816 "raid_level": "concat", 00:19:41.816 "superblock": true, 00:19:41.816 "num_base_bdevs": 2, 00:19:41.816 "num_base_bdevs_discovered": 1, 00:19:41.816 "num_base_bdevs_operational": 2, 00:19:41.816 
"base_bdevs_list": [ 00:19:41.816 { 00:19:41.816 "name": "BaseBdev1", 00:19:41.816 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:41.816 "is_configured": true, 00:19:41.816 "data_offset": 2048, 00:19:41.816 "data_size": 63488 00:19:41.816 }, 00:19:41.816 { 00:19:41.816 "name": "BaseBdev2", 00:19:41.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.816 "is_configured": false, 00:19:41.816 "data_offset": 0, 00:19:41.816 "data_size": 0 00:19:41.816 } 00:19:41.816 ] 00:19:41.816 }' 00:19:41.816 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.816 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.075 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:42.335 [2024-05-16 07:32:35.816822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.335 [2024-05-16 07:32:35.816872] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd7ca00 00:19:42.335 [2024-05-16 07:32:35.816876] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:42.335 [2024-05-16 07:32:35.816892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bddfec0 00:19:42.335 [2024-05-16 07:32:35.816922] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd7ca00 00:19:42.335 [2024-05-16 07:32:35.816925] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bd7ca00 00:19:42.335 [2024-05-16 07:32:35.816938] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.335 BaseBdev2 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:42.335 07:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.594 07:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:42.854 [ 00:19:42.854 { 00:19:42.854 "name": "BaseBdev2", 00:19:42.854 "aliases": [ 00:19:42.854 "771b4acc-1356-11ef-8e8f-9dd684e56d79" 00:19:42.854 ], 00:19:42.854 "product_name": "Malloc disk", 00:19:42.854 "block_size": 512, 00:19:42.854 "num_blocks": 65536, 00:19:42.854 "uuid": "771b4acc-1356-11ef-8e8f-9dd684e56d79", 00:19:42.854 "assigned_rate_limits": { 00:19:42.854 "rw_ios_per_sec": 0, 00:19:42.854 "rw_mbytes_per_sec": 0, 00:19:42.854 "r_mbytes_per_sec": 0, 00:19:42.854 "w_mbytes_per_sec": 0 00:19:42.854 }, 00:19:42.854 "claimed": true, 00:19:42.854 "claim_type": "exclusive_write", 00:19:42.854 "zoned": false, 
00:19:42.854 "supported_io_types": { 00:19:42.854 "read": true, 00:19:42.854 "write": true, 00:19:42.854 "unmap": true, 00:19:42.854 "write_zeroes": true, 00:19:42.854 "flush": true, 00:19:42.854 "reset": true, 00:19:42.854 "compare": false, 00:19:42.854 "compare_and_write": false, 00:19:42.854 "abort": true, 00:19:42.854 "nvme_admin": false, 00:19:42.854 "nvme_io": false 00:19:42.854 }, 00:19:42.854 "memory_domains": [ 00:19:42.854 { 00:19:42.854 "dma_device_id": "system", 00:19:42.854 "dma_device_type": 1 00:19:42.854 }, 00:19:42.854 { 00:19:42.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.854 "dma_device_type": 2 00:19:42.854 } 00:19:42.854 ], 00:19:42.854 "driver_specific": {} 00:19:42.854 } 00:19:42.854 ] 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:42.854 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.855 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.126 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.126 "name": "Existed_Raid", 00:19:43.126 "uuid": "76ab9850-1356-11ef-8e8f-9dd684e56d79", 00:19:43.126 "strip_size_kb": 64, 00:19:43.126 "state": "online", 00:19:43.126 "raid_level": "concat", 00:19:43.126 "superblock": true, 00:19:43.126 "num_base_bdevs": 2, 00:19:43.126 "num_base_bdevs_discovered": 2, 00:19:43.126 "num_base_bdevs_operational": 2, 00:19:43.126 "base_bdevs_list": [ 00:19:43.126 { 00:19:43.126 "name": "BaseBdev1", 00:19:43.126 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:43.126 "is_configured": true, 00:19:43.126 "data_offset": 2048, 00:19:43.126 "data_size": 63488 00:19:43.126 }, 00:19:43.126 { 00:19:43.126 "name": "BaseBdev2", 00:19:43.126 "uuid": "771b4acc-1356-11ef-8e8f-9dd684e56d79", 00:19:43.126 "is_configured": true, 00:19:43.126 "data_offset": 2048, 00:19:43.126 "data_size": 63488 00:19:43.126 } 00:19:43.126 ] 00:19:43.126 }' 
00:19:43.126 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.126 07:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:43.386 07:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:43.645 [2024-05-16 07:32:37.204792] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:43.906 "name": "Existed_Raid", 00:19:43.906 "aliases": [ 00:19:43.906 "76ab9850-1356-11ef-8e8f-9dd684e56d79" 00:19:43.906 ], 00:19:43.906 "product_name": "Raid Volume", 00:19:43.906 "block_size": 512, 00:19:43.906 "num_blocks": 126976, 00:19:43.906 "uuid": "76ab9850-1356-11ef-8e8f-9dd684e56d79", 00:19:43.906 "assigned_rate_limits": { 00:19:43.906 "rw_ios_per_sec": 0, 00:19:43.906 "rw_mbytes_per_sec": 0, 00:19:43.906 "r_mbytes_per_sec": 0, 00:19:43.906 "w_mbytes_per_sec": 0 00:19:43.906 }, 00:19:43.906 "claimed": false, 00:19:43.906 "zoned": false, 00:19:43.906 "supported_io_types": { 00:19:43.906 "read": true, 00:19:43.906 "write": true, 00:19:43.906 "unmap": true, 00:19:43.906 "write_zeroes": true, 00:19:43.906 "flush": true, 00:19:43.906 "reset": true, 00:19:43.906 "compare": false, 00:19:43.906 "compare_and_write": false, 00:19:43.906 "abort": false, 00:19:43.906 "nvme_admin": false, 00:19:43.906 "nvme_io": false 00:19:43.906 }, 00:19:43.906 "memory_domains": [ 00:19:43.906 { 00:19:43.906 "dma_device_id": "system", 00:19:43.906 "dma_device_type": 1 00:19:43.906 }, 00:19:43.906 { 00:19:43.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.906 "dma_device_type": 2 00:19:43.906 }, 00:19:43.906 { 00:19:43.906 "dma_device_id": "system", 00:19:43.906 "dma_device_type": 1 00:19:43.906 }, 00:19:43.906 { 00:19:43.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.906 "dma_device_type": 2 00:19:43.906 } 00:19:43.906 ], 00:19:43.906 "driver_specific": { 00:19:43.906 "raid": { 00:19:43.906 "uuid": "76ab9850-1356-11ef-8e8f-9dd684e56d79", 00:19:43.906 "strip_size_kb": 64, 00:19:43.906 "state": "online", 00:19:43.906 "raid_level": "concat", 00:19:43.906 "superblock": true, 00:19:43.906 "num_base_bdevs": 2, 00:19:43.906 "num_base_bdevs_discovered": 2, 00:19:43.906 "num_base_bdevs_operational": 2, 00:19:43.906 "base_bdevs_list": [ 00:19:43.906 { 00:19:43.906 "name": "BaseBdev1", 00:19:43.906 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:43.906 "is_configured": true, 00:19:43.906 "data_offset": 2048, 00:19:43.906 "data_size": 63488 00:19:43.906 }, 00:19:43.906 { 00:19:43.906 "name": 
"BaseBdev2", 00:19:43.906 "uuid": "771b4acc-1356-11ef-8e8f-9dd684e56d79", 00:19:43.906 "is_configured": true, 00:19:43.906 "data_offset": 2048, 00:19:43.906 "data_size": 63488 00:19:43.906 } 00:19:43.906 ] 00:19:43.906 } 00:19:43.906 } 00:19:43.906 }' 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:19:43.906 BaseBdev2' 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:43.906 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:44.167 "name": "BaseBdev1", 00:19:44.167 "aliases": [ 00:19:44.167 "759c98b0-1356-11ef-8e8f-9dd684e56d79" 00:19:44.167 ], 00:19:44.167 "product_name": "Malloc disk", 00:19:44.167 "block_size": 512, 00:19:44.167 "num_blocks": 65536, 00:19:44.167 "uuid": "759c98b0-1356-11ef-8e8f-9dd684e56d79", 00:19:44.167 "assigned_rate_limits": { 00:19:44.167 "rw_ios_per_sec": 0, 00:19:44.167 "rw_mbytes_per_sec": 0, 00:19:44.167 "r_mbytes_per_sec": 0, 00:19:44.167 "w_mbytes_per_sec": 0 00:19:44.167 }, 00:19:44.167 "claimed": true, 00:19:44.167 "claim_type": "exclusive_write", 00:19:44.167 "zoned": false, 00:19:44.167 "supported_io_types": { 00:19:44.167 "read": true, 00:19:44.167 "write": true, 00:19:44.167 "unmap": true, 00:19:44.167 "write_zeroes": true, 00:19:44.167 "flush": true, 00:19:44.167 "reset": true, 00:19:44.167 "compare": false, 00:19:44.167 "compare_and_write": false, 00:19:44.167 "abort": true, 00:19:44.167 "nvme_admin": false, 00:19:44.167 "nvme_io": false 00:19:44.167 }, 00:19:44.167 "memory_domains": [ 00:19:44.167 { 00:19:44.167 "dma_device_id": "system", 00:19:44.167 "dma_device_type": 1 00:19:44.167 }, 00:19:44.167 { 00:19:44.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.167 "dma_device_type": 2 00:19:44.167 } 00:19:44.167 ], 00:19:44.167 "driver_specific": {} 00:19:44.167 }' 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq 
.dif_type 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:44.167 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:44.427 "name": "BaseBdev2", 00:19:44.427 "aliases": [ 00:19:44.427 "771b4acc-1356-11ef-8e8f-9dd684e56d79" 00:19:44.427 ], 00:19:44.427 "product_name": "Malloc disk", 00:19:44.427 "block_size": 512, 00:19:44.427 "num_blocks": 65536, 00:19:44.427 "uuid": "771b4acc-1356-11ef-8e8f-9dd684e56d79", 00:19:44.427 "assigned_rate_limits": { 00:19:44.427 "rw_ios_per_sec": 0, 00:19:44.427 "rw_mbytes_per_sec": 0, 00:19:44.427 "r_mbytes_per_sec": 0, 00:19:44.427 "w_mbytes_per_sec": 0 00:19:44.427 }, 00:19:44.427 "claimed": true, 00:19:44.427 "claim_type": "exclusive_write", 00:19:44.427 "zoned": false, 00:19:44.427 "supported_io_types": { 00:19:44.427 "read": true, 00:19:44.427 "write": true, 00:19:44.427 "unmap": true, 00:19:44.427 "write_zeroes": true, 00:19:44.427 "flush": true, 00:19:44.427 "reset": true, 00:19:44.427 "compare": false, 00:19:44.427 "compare_and_write": false, 00:19:44.427 "abort": true, 00:19:44.427 "nvme_admin": false, 00:19:44.427 "nvme_io": false 00:19:44.427 }, 00:19:44.427 "memory_domains": [ 00:19:44.427 { 00:19:44.427 "dma_device_id": "system", 00:19:44.427 "dma_device_type": 1 00:19:44.427 }, 00:19:44.427 { 00:19:44.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.427 "dma_device_type": 2 00:19:44.427 } 00:19:44.427 ], 00:19:44.427 "driver_specific": {} 00:19:44.427 }' 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:44.427 07:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:44.686 [2024-05-16 07:32:38.204787] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:44.686 [2024-05-16 
07:32:38.204811] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.687 [2024-05-16 07:32:38.204823] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.687 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.944 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.944 "name": "Existed_Raid", 00:19:44.944 "uuid": "76ab9850-1356-11ef-8e8f-9dd684e56d79", 00:19:44.944 "strip_size_kb": 64, 00:19:44.944 "state": "offline", 00:19:44.944 "raid_level": "concat", 00:19:44.944 "superblock": true, 00:19:44.944 "num_base_bdevs": 2, 00:19:44.944 "num_base_bdevs_discovered": 1, 00:19:44.944 "num_base_bdevs_operational": 1, 00:19:44.944 "base_bdevs_list": [ 00:19:44.944 { 00:19:44.944 "name": null, 00:19:44.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.944 "is_configured": false, 00:19:44.944 "data_offset": 2048, 00:19:44.944 "data_size": 63488 00:19:44.944 }, 00:19:44.944 { 00:19:44.944 "name": "BaseBdev2", 00:19:44.944 "uuid": "771b4acc-1356-11ef-8e8f-9dd684e56d79", 00:19:44.944 "is_configured": true, 00:19:44.944 "data_offset": 2048, 00:19:44.944 "data_size": 63488 00:19:44.944 } 00:19:44.944 ] 00:19:44.944 }' 00:19:44.944 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.944 07:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.511 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 
)) 00:19:45.511 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:45.511 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:19:45.511 07:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.511 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:19:45.511 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:45.511 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:45.769 [2024-05-16 07:32:39.257609] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.769 [2024-05-16 07:32:39.257641] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd7ca00 name Existed_Raid, state offline 00:19:45.769 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:45.769 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:45.769 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.769 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:19:46.027 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:19:46.027 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 50564 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 50564 ']' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 50564 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 50564 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:46.028 killing process with pid 50564 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50564' 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 50564 00:19:46.028 [2024-05-16 07:32:39.516822] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:46.028 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 50564 00:19:46.028 [2024-05-16 07:32:39.516865] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.286 07:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:19:46.286 00:19:46.286 real 0m8.975s 00:19:46.286 user 0m15.791s 00:19:46.286 sys 0m1.430s 00:19:46.286 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:46.286 07:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.286 ************************************ 00:19:46.286 END TEST raid_state_function_test_sb 00:19:46.286 ************************************ 00:19:46.286 07:32:39 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:19:46.286 07:32:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:46.286 07:32:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:46.286 07:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.286 ************************************ 00:19:46.286 START TEST raid_superblock_test 00:19:46.286 ************************************ 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=50838 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 50838 /var/tmp/spdk-raid.sock 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 50838 ']' 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.286 07:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.286 [2024-05-16 07:32:39.733066] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:46.286 [2024-05-16 07:32:39.733302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:46.853 EAL: TSC is not safe to use in SMP mode 00:19:46.853 EAL: TSC is not invariant 00:19:46.853 [2024-05-16 07:32:40.208388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.853 [2024-05-16 07:32:40.291628] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:46.853 [2024-05-16 07:32:40.293778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.853 [2024-05-16 07:32:40.294480] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.853 [2024-05-16 07:32:40.294504] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:47.418 07:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:47.676 malloc1 00:19:47.676 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:47.934 [2024-05-16 07:32:41.336998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:47.934 [2024-05-16 07:32:41.337051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.934 [2024-05-16 07:32:41.337636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a61c780 00:19:47.934 
[2024-05-16 07:32:41.337664] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.934 [2024-05-16 07:32:41.338369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.934 [2024-05-16 07:32:41.338399] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:47.934 pt1 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:47.934 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:48.193 malloc2 00:19:48.193 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:48.451 [2024-05-16 07:32:41.897004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:48.451 [2024-05-16 07:32:41.897052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.451 [2024-05-16 07:32:41.897075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a61cc80 00:19:48.451 [2024-05-16 07:32:41.897082] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.451 [2024-05-16 07:32:41.897531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.451 [2024-05-16 07:32:41.897552] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:48.451 pt2 00:19:48.451 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:48.451 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:48.451 07:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:19:48.709 [2024-05-16 07:32:42.121024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:48.709 [2024-05-16 07:32:42.121430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:48.709 [2024-05-16 07:32:42.121473] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a61cf00 00:19:48.709 [2024-05-16 07:32:42.121479] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:48.709 [2024-05-16 07:32:42.121506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a67fe20 00:19:48.709 [2024-05-16 07:32:42.121559] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a61cf00 00:19:48.709 [2024-05-16 07:32:42.121563] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a61cf00 00:19:48.709 [2024-05-16 07:32:42.121581] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.709 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.968 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.968 "name": "raid_bdev1", 00:19:48.968 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:48.968 "strip_size_kb": 64, 00:19:48.968 "state": "online", 00:19:48.968 "raid_level": "concat", 00:19:48.968 "superblock": true, 00:19:48.968 "num_base_bdevs": 2, 00:19:48.968 "num_base_bdevs_discovered": 2, 00:19:48.968 "num_base_bdevs_operational": 2, 00:19:48.968 "base_bdevs_list": [ 00:19:48.968 { 00:19:48.968 "name": "pt1", 00:19:48.968 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:48.968 "is_configured": true, 00:19:48.968 "data_offset": 2048, 00:19:48.968 "data_size": 63488 00:19:48.968 }, 00:19:48.968 { 00:19:48.968 "name": "pt2", 00:19:48.968 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:48.968 "is_configured": true, 00:19:48.968 "data_offset": 2048, 00:19:48.968 "data_size": 63488 00:19:48.968 } 00:19:48.968 ] 00:19:48.968 }' 00:19:48.968 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.968 07:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@199 -- # local name 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:49.226 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:49.485 [2024-05-16 07:32:42.957060] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:49.485 "name": "raid_bdev1", 00:19:49.485 "aliases": [ 00:19:49.485 "7add4052-1356-11ef-8e8f-9dd684e56d79" 00:19:49.485 ], 00:19:49.485 "product_name": "Raid Volume", 00:19:49.485 "block_size": 512, 00:19:49.485 "num_blocks": 126976, 00:19:49.485 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:49.485 "assigned_rate_limits": { 00:19:49.485 "rw_ios_per_sec": 0, 00:19:49.485 "rw_mbytes_per_sec": 0, 00:19:49.485 "r_mbytes_per_sec": 0, 00:19:49.485 "w_mbytes_per_sec": 0 00:19:49.485 }, 00:19:49.485 "claimed": false, 00:19:49.485 "zoned": false, 00:19:49.485 "supported_io_types": { 00:19:49.485 "read": true, 00:19:49.485 "write": true, 00:19:49.485 "unmap": true, 00:19:49.485 "write_zeroes": true, 00:19:49.485 "flush": true, 00:19:49.485 "reset": true, 00:19:49.485 "compare": false, 00:19:49.485 "compare_and_write": false, 00:19:49.485 "abort": false, 00:19:49.485 "nvme_admin": false, 00:19:49.485 "nvme_io": false 00:19:49.485 }, 00:19:49.485 "memory_domains": [ 00:19:49.485 { 00:19:49.485 "dma_device_id": "system", 00:19:49.485 "dma_device_type": 1 00:19:49.485 }, 00:19:49.485 { 00:19:49.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.485 "dma_device_type": 2 00:19:49.485 }, 00:19:49.485 { 00:19:49.485 "dma_device_id": "system", 00:19:49.485 "dma_device_type": 1 00:19:49.485 }, 00:19:49.485 { 00:19:49.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.485 "dma_device_type": 2 00:19:49.485 } 00:19:49.485 ], 00:19:49.485 "driver_specific": { 00:19:49.485 "raid": { 00:19:49.485 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:49.485 "strip_size_kb": 64, 00:19:49.485 "state": "online", 00:19:49.485 "raid_level": "concat", 00:19:49.485 "superblock": true, 00:19:49.485 "num_base_bdevs": 2, 00:19:49.485 "num_base_bdevs_discovered": 2, 00:19:49.485 "num_base_bdevs_operational": 2, 00:19:49.485 "base_bdevs_list": [ 00:19:49.485 { 00:19:49.485 "name": "pt1", 00:19:49.485 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:49.485 "is_configured": true, 00:19:49.485 "data_offset": 2048, 00:19:49.485 "data_size": 63488 00:19:49.485 }, 00:19:49.485 { 00:19:49.485 "name": "pt2", 00:19:49.485 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:49.485 "is_configured": true, 00:19:49.485 "data_offset": 2048, 00:19:49.485 "data_size": 63488 00:19:49.485 } 00:19:49.485 ] 00:19:49.485 } 00:19:49.485 } 00:19:49.485 }' 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:49.485 pt2' 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:49.485 07:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:49.744 "name": "pt1", 00:19:49.744 "aliases": [ 00:19:49.744 "4902ca46-c609-e552-a62b-2bead700eb28" 00:19:49.744 ], 00:19:49.744 "product_name": "passthru", 00:19:49.744 "block_size": 512, 00:19:49.744 "num_blocks": 65536, 00:19:49.744 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:49.744 "assigned_rate_limits": { 00:19:49.744 "rw_ios_per_sec": 0, 00:19:49.744 "rw_mbytes_per_sec": 0, 00:19:49.744 "r_mbytes_per_sec": 0, 00:19:49.744 "w_mbytes_per_sec": 0 00:19:49.744 }, 00:19:49.744 "claimed": true, 00:19:49.744 "claim_type": "exclusive_write", 00:19:49.744 "zoned": false, 00:19:49.744 "supported_io_types": { 00:19:49.744 "read": true, 00:19:49.744 "write": true, 00:19:49.744 "unmap": true, 00:19:49.744 "write_zeroes": true, 00:19:49.744 "flush": true, 00:19:49.744 "reset": true, 00:19:49.744 "compare": false, 00:19:49.744 "compare_and_write": false, 00:19:49.744 "abort": true, 00:19:49.744 "nvme_admin": false, 00:19:49.744 "nvme_io": false 00:19:49.744 }, 00:19:49.744 "memory_domains": [ 00:19:49.744 { 00:19:49.744 "dma_device_id": "system", 00:19:49.744 "dma_device_type": 1 00:19:49.744 }, 00:19:49.744 { 00:19:49.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.744 "dma_device_type": 2 00:19:49.744 } 00:19:49.744 ], 00:19:49.744 "driver_specific": { 00:19:49.744 "passthru": { 00:19:49.744 "name": "pt1", 00:19:49.744 "base_bdev_name": "malloc1" 00:19:49.744 } 00:19:49.744 } 00:19:49.744 }' 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:49.744 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:50.002 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:50.002 "name": "pt2", 00:19:50.002 "aliases": [ 00:19:50.002 "1b574fe5-c884-d953-9b47-04fb8b3e9452" 00:19:50.002 ], 00:19:50.002 "product_name": "passthru", 00:19:50.002 "block_size": 512, 00:19:50.002 "num_blocks": 65536, 00:19:50.002 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:50.002 
"assigned_rate_limits": { 00:19:50.002 "rw_ios_per_sec": 0, 00:19:50.003 "rw_mbytes_per_sec": 0, 00:19:50.003 "r_mbytes_per_sec": 0, 00:19:50.003 "w_mbytes_per_sec": 0 00:19:50.003 }, 00:19:50.003 "claimed": true, 00:19:50.003 "claim_type": "exclusive_write", 00:19:50.003 "zoned": false, 00:19:50.003 "supported_io_types": { 00:19:50.003 "read": true, 00:19:50.003 "write": true, 00:19:50.003 "unmap": true, 00:19:50.003 "write_zeroes": true, 00:19:50.003 "flush": true, 00:19:50.003 "reset": true, 00:19:50.003 "compare": false, 00:19:50.003 "compare_and_write": false, 00:19:50.003 "abort": true, 00:19:50.003 "nvme_admin": false, 00:19:50.003 "nvme_io": false 00:19:50.003 }, 00:19:50.003 "memory_domains": [ 00:19:50.003 { 00:19:50.003 "dma_device_id": "system", 00:19:50.003 "dma_device_type": 1 00:19:50.003 }, 00:19:50.003 { 00:19:50.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.003 "dma_device_type": 2 00:19:50.003 } 00:19:50.003 ], 00:19:50.003 "driver_specific": { 00:19:50.003 "passthru": { 00:19:50.003 "name": "pt2", 00:19:50.003 "base_bdev_name": "malloc2" 00:19:50.003 } 00:19:50.003 } 00:19:50.003 }' 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:50.003 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:50.261 [2024-05-16 07:32:43.809115] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.519 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7add4052-1356-11ef-8e8f-9dd684e56d79 00:19:50.519 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7add4052-1356-11ef-8e8f-9dd684e56d79 ']' 00:19:50.519 07:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:50.783 [2024-05-16 07:32:44.093098] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.784 [2024-05-16 07:32:44.093120] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.784 [2024-05-16 07:32:44.093134] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.784 
[2024-05-16 07:32:44.093143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.784 [2024-05-16 07:32:44.093147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a61cf00 name raid_bdev1, state offline 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:50.784 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:51.042 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:51.042 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:51.301 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:51.301 07:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:51.559 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:51.560 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:19:51.819 [2024-05-16 07:32:45.301127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:51.819 [2024-05-16 07:32:45.301586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:51.819 [2024-05-16 07:32:45.301609] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:51.819 [2024-05-16 07:32:45.301649] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:51.819 [2024-05-16 07:32:45.301658] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.819 [2024-05-16 07:32:45.301662] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a61cc80 name raid_bdev1, state configuring 00:19:51.819 request: 00:19:51.819 { 00:19:51.819 "name": "raid_bdev1", 00:19:51.819 "raid_level": "concat", 00:19:51.819 "base_bdevs": [ 00:19:51.819 "malloc1", 00:19:51.819 "malloc2" 00:19:51.819 ], 00:19:51.819 "superblock": false, 00:19:51.819 "strip_size_kb": 64, 00:19:51.819 "method": "bdev_raid_create", 00:19:51.819 "req_id": 1 00:19:51.819 } 00:19:51.819 Got JSON-RPC error response 00:19:51.819 response: 00:19:51.819 { 00:19:51.819 "code": -17, 00:19:51.819 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:51.819 } 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.819 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:52.077 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:52.077 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:52.077 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:52.335 [2024-05-16 07:32:45.765129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:52.335 [2024-05-16 07:32:45.765172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.335 [2024-05-16 07:32:45.765212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a61c780 00:19:52.335 [2024-05-16 07:32:45.765219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.335 [2024-05-16 07:32:45.765718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.335 [2024-05-16 07:32:45.765754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:52.335 [2024-05-16 07:32:45.765773] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:52.335 [2024-05-16 07:32:45.765784] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:52.335 pt1 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.335 07:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.622 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.622 "name": "raid_bdev1", 00:19:52.622 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:52.622 "strip_size_kb": 64, 00:19:52.622 "state": "configuring", 00:19:52.622 "raid_level": "concat", 00:19:52.622 "superblock": true, 00:19:52.622 "num_base_bdevs": 2, 00:19:52.622 "num_base_bdevs_discovered": 1, 00:19:52.622 "num_base_bdevs_operational": 2, 00:19:52.622 "base_bdevs_list": [ 00:19:52.622 { 00:19:52.622 "name": "pt1", 00:19:52.622 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:52.622 "is_configured": true, 00:19:52.622 "data_offset": 2048, 00:19:52.622 "data_size": 63488 00:19:52.622 }, 00:19:52.622 { 00:19:52.622 "name": null, 00:19:52.622 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:52.622 "is_configured": false, 00:19:52.622 "data_offset": 2048, 00:19:52.622 "data_size": 63488 00:19:52.622 } 00:19:52.622 ] 00:19:52.622 }' 00:19:52.623 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.623 07:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.881 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:52.881 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:52.881 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:52.881 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:53.138 [2024-05-16 07:32:46.621139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:53.138 [2024-05-16 07:32:46.621182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.138 [2024-05-16 07:32:46.621208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82a61cf00 00:19:53.138 [2024-05-16 07:32:46.621215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.138 [2024-05-16 07:32:46.621284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.138 [2024-05-16 07:32:46.621292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:53.138 [2024-05-16 07:32:46.621308] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:53.138 [2024-05-16 07:32:46.621314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:53.138 [2024-05-16 07:32:46.621330] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a61d180 00:19:53.138 [2024-05-16 07:32:46.621334] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:53.138 [2024-05-16 07:32:46.621350] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a67fe20 00:19:53.138 [2024-05-16 07:32:46.621412] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a61d180 00:19:53.139 [2024-05-16 07:32:46.621416] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a61d180 00:19:53.139 [2024-05-16 07:32:46.621433] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.139 pt2 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.139 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.396 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.396 "name": "raid_bdev1", 00:19:53.396 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:53.396 "strip_size_kb": 64, 00:19:53.396 "state": "online", 00:19:53.396 "raid_level": "concat", 00:19:53.396 "superblock": true, 00:19:53.396 "num_base_bdevs": 2, 00:19:53.396 "num_base_bdevs_discovered": 2, 00:19:53.396 "num_base_bdevs_operational": 2, 00:19:53.396 "base_bdevs_list": [ 00:19:53.396 { 00:19:53.396 
"name": "pt1", 00:19:53.396 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:53.396 "is_configured": true, 00:19:53.396 "data_offset": 2048, 00:19:53.396 "data_size": 63488 00:19:53.396 }, 00:19:53.396 { 00:19:53.396 "name": "pt2", 00:19:53.396 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:53.396 "is_configured": true, 00:19:53.396 "data_offset": 2048, 00:19:53.396 "data_size": 63488 00:19:53.396 } 00:19:53.396 ] 00:19:53.396 }' 00:19:53.396 07:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.396 07:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:19:53.654 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:53.912 [2024-05-16 07:32:47.437190] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.912 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:19:53.912 "name": "raid_bdev1", 00:19:53.912 "aliases": [ 00:19:53.912 "7add4052-1356-11ef-8e8f-9dd684e56d79" 00:19:53.912 ], 00:19:53.912 "product_name": "Raid Volume", 00:19:53.912 "block_size": 512, 00:19:53.912 "num_blocks": 126976, 00:19:53.912 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:53.912 "assigned_rate_limits": { 00:19:53.912 "rw_ios_per_sec": 0, 00:19:53.912 "rw_mbytes_per_sec": 0, 00:19:53.912 "r_mbytes_per_sec": 0, 00:19:53.912 "w_mbytes_per_sec": 0 00:19:53.912 }, 00:19:53.912 "claimed": false, 00:19:53.912 "zoned": false, 00:19:53.912 "supported_io_types": { 00:19:53.912 "read": true, 00:19:53.912 "write": true, 00:19:53.913 "unmap": true, 00:19:53.913 "write_zeroes": true, 00:19:53.913 "flush": true, 00:19:53.913 "reset": true, 00:19:53.913 "compare": false, 00:19:53.913 "compare_and_write": false, 00:19:53.913 "abort": false, 00:19:53.913 "nvme_admin": false, 00:19:53.913 "nvme_io": false 00:19:53.913 }, 00:19:53.913 "memory_domains": [ 00:19:53.913 { 00:19:53.913 "dma_device_id": "system", 00:19:53.913 "dma_device_type": 1 00:19:53.913 }, 00:19:53.913 { 00:19:53.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.913 "dma_device_type": 2 00:19:53.913 }, 00:19:53.913 { 00:19:53.913 "dma_device_id": "system", 00:19:53.913 "dma_device_type": 1 00:19:53.913 }, 00:19:53.913 { 00:19:53.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.913 "dma_device_type": 2 00:19:53.913 } 00:19:53.913 ], 00:19:53.913 "driver_specific": { 00:19:53.913 "raid": { 00:19:53.913 "uuid": "7add4052-1356-11ef-8e8f-9dd684e56d79", 00:19:53.913 "strip_size_kb": 64, 00:19:53.913 "state": "online", 00:19:53.913 "raid_level": "concat", 00:19:53.913 "superblock": true, 00:19:53.913 "num_base_bdevs": 2, 00:19:53.913 
"num_base_bdevs_discovered": 2, 00:19:53.913 "num_base_bdevs_operational": 2, 00:19:53.913 "base_bdevs_list": [ 00:19:53.913 { 00:19:53.913 "name": "pt1", 00:19:53.913 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:53.913 "is_configured": true, 00:19:53.913 "data_offset": 2048, 00:19:53.913 "data_size": 63488 00:19:53.913 }, 00:19:53.913 { 00:19:53.913 "name": "pt2", 00:19:53.913 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:53.913 "is_configured": true, 00:19:53.913 "data_offset": 2048, 00:19:53.913 "data_size": 63488 00:19:53.913 } 00:19:53.913 ] 00:19:53.913 } 00:19:53.913 } 00:19:53.913 }' 00:19:53.913 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.913 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:19:53.913 pt2' 00:19:53.913 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:53.913 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:53.913 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:54.171 "name": "pt1", 00:19:54.171 "aliases": [ 00:19:54.171 "4902ca46-c609-e552-a62b-2bead700eb28" 00:19:54.171 ], 00:19:54.171 "product_name": "passthru", 00:19:54.171 "block_size": 512, 00:19:54.171 "num_blocks": 65536, 00:19:54.171 "uuid": "4902ca46-c609-e552-a62b-2bead700eb28", 00:19:54.171 "assigned_rate_limits": { 00:19:54.171 "rw_ios_per_sec": 0, 00:19:54.171 "rw_mbytes_per_sec": 0, 00:19:54.171 "r_mbytes_per_sec": 0, 00:19:54.171 "w_mbytes_per_sec": 0 00:19:54.171 }, 00:19:54.171 "claimed": true, 00:19:54.171 "claim_type": "exclusive_write", 00:19:54.171 "zoned": false, 00:19:54.171 "supported_io_types": { 00:19:54.171 "read": true, 00:19:54.171 "write": true, 00:19:54.171 "unmap": true, 00:19:54.171 "write_zeroes": true, 00:19:54.171 "flush": true, 00:19:54.171 "reset": true, 00:19:54.171 "compare": false, 00:19:54.171 "compare_and_write": false, 00:19:54.171 "abort": true, 00:19:54.171 "nvme_admin": false, 00:19:54.171 "nvme_io": false 00:19:54.171 }, 00:19:54.171 "memory_domains": [ 00:19:54.171 { 00:19:54.171 "dma_device_id": "system", 00:19:54.171 "dma_device_type": 1 00:19:54.171 }, 00:19:54.171 { 00:19:54.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.171 "dma_device_type": 2 00:19:54.171 } 00:19:54.171 ], 00:19:54.171 "driver_specific": { 00:19:54.171 "passthru": { 00:19:54.171 "name": "pt1", 00:19:54.171 "base_bdev_name": "malloc1" 00:19:54.171 } 00:19:54.171 } 00:19:54.171 }' 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.171 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.430 07:32:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:54.430 07:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:19:54.688 "name": "pt2", 00:19:54.688 "aliases": [ 00:19:54.688 "1b574fe5-c884-d953-9b47-04fb8b3e9452" 00:19:54.688 ], 00:19:54.688 "product_name": "passthru", 00:19:54.688 "block_size": 512, 00:19:54.688 "num_blocks": 65536, 00:19:54.688 "uuid": "1b574fe5-c884-d953-9b47-04fb8b3e9452", 00:19:54.688 "assigned_rate_limits": { 00:19:54.688 "rw_ios_per_sec": 0, 00:19:54.688 "rw_mbytes_per_sec": 0, 00:19:54.688 "r_mbytes_per_sec": 0, 00:19:54.688 "w_mbytes_per_sec": 0 00:19:54.688 }, 00:19:54.688 "claimed": true, 00:19:54.688 "claim_type": "exclusive_write", 00:19:54.688 "zoned": false, 00:19:54.688 "supported_io_types": { 00:19:54.688 "read": true, 00:19:54.688 "write": true, 00:19:54.688 "unmap": true, 00:19:54.688 "write_zeroes": true, 00:19:54.688 "flush": true, 00:19:54.688 "reset": true, 00:19:54.688 "compare": false, 00:19:54.688 "compare_and_write": false, 00:19:54.688 "abort": true, 00:19:54.688 "nvme_admin": false, 00:19:54.688 "nvme_io": false 00:19:54.688 }, 00:19:54.688 "memory_domains": [ 00:19:54.688 { 00:19:54.688 "dma_device_id": "system", 00:19:54.688 "dma_device_type": 1 00:19:54.688 }, 00:19:54.688 { 00:19:54.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.688 "dma_device_type": 2 00:19:54.688 } 00:19:54.688 ], 00:19:54.688 "driver_specific": { 00:19:54.688 "passthru": { 00:19:54.688 "name": "pt2", 00:19:54.688 "base_bdev_name": "malloc2" 00:19:54.688 } 00:19:54.688 } 00:19:54.688 }' 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.688 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # [[ null == null ]] 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:54.689 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:54.947 [2024-05-16 07:32:48.313207] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7add4052-1356-11ef-8e8f-9dd684e56d79 '!=' 7add4052-1356-11ef-8e8f-9dd684e56d79 ']' 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 50838 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 50838 ']' 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 50838 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 50838 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:19:54.947 killing process with pid 50838 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50838' 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 50838 00:19:54.947 [2024-05-16 07:32:48.343825] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.947 [2024-05-16 07:32:48.343842] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.947 [2024-05-16 07:32:48.343862] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.947 [2024-05-16 07:32:48.343867] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a61d180 name raid_bdev1, state offline 00:19:54.947 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 50838 00:19:54.947 [2024-05-16 07:32:48.353455] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.205 07:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:55.205 00:19:55.205 real 0m8.795s 00:19:55.205 user 0m15.343s 00:19:55.205 sys 0m1.524s 00:19:55.205 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:55.205 07:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.205 ************************************ 00:19:55.205 END TEST raid_superblock_test 00:19:55.205 ************************************ 00:19:55.205 07:32:48 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:19:55.205 07:32:48 bdev_raid -- bdev/bdev_raid.sh@803 -- 
# run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:55.205 07:32:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:55.205 07:32:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:55.205 07:32:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:55.205 ************************************ 00:19:55.205 START TEST raid_state_function_test 00:19:55.205 ************************************ 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:19:55.205 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=51101 00:19:55.206 Process raid pid: 51101 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 51101' 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 51101 /var/tmp/spdk-raid.sock 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 51101 ']' 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:55.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:55.206 07:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.206 [2024-05-16 07:32:48.570405] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:55.206 [2024-05-16 07:32:48.570617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:55.517 EAL: TSC is not safe to use in SMP mode 00:19:55.517 EAL: TSC is not invariant 00:19:55.517 [2024-05-16 07:32:49.062275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.774 [2024-05-16 07:32:49.144126] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:55.774 [2024-05-16 07:32:49.146240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.774 [2024-05-16 07:32:49.146946] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.774 [2024-05-16 07:32:49.146959] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.340 07:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:56.340 07:32:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:19:56.340 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:56.599 [2024-05-16 07:32:49.909615] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:56.599 [2024-05-16 07:32:49.909683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:56.599 [2024-05-16 07:32:49.909689] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.599 [2024-05-16 07:32:49.909697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:56.599 07:32:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.599 07:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.599 07:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.599 "name": "Existed_Raid", 00:19:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.599 "strip_size_kb": 0, 00:19:56.599 "state": "configuring", 00:19:56.599 "raid_level": "raid1", 00:19:56.599 "superblock": false, 00:19:56.599 "num_base_bdevs": 2, 00:19:56.599 "num_base_bdevs_discovered": 0, 00:19:56.599 "num_base_bdevs_operational": 2, 00:19:56.599 "base_bdevs_list": [ 00:19:56.599 { 00:19:56.599 "name": "BaseBdev1", 00:19:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.599 "is_configured": false, 00:19:56.599 "data_offset": 0, 00:19:56.599 "data_size": 0 00:19:56.599 }, 00:19:56.600 { 00:19:56.600 "name": "BaseBdev2", 00:19:56.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.600 "is_configured": false, 00:19:56.600 "data_offset": 0, 00:19:56.600 "data_size": 0 00:19:56.600 } 00:19:56.600 ] 00:19:56.600 }' 00:19:56.600 07:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.600 07:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.168 07:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.426 [2024-05-16 07:32:50.793630] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.426 [2024-05-16 07:32:50.793666] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca91500 name Existed_Raid, state configuring 00:19:57.426 07:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:57.684 [2024-05-16 07:32:51.009628] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:57.684 [2024-05-16 07:32:51.009676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:57.684 [2024-05-16 07:32:51.009680] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.684 [2024-05-16 07:32:51.009688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.684 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:57.943 [2024-05-16 07:32:51.346617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.943 BaseBdev1 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:57.943 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.202 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:58.460 [ 00:19:58.460 { 00:19:58.460 "name": "BaseBdev1", 00:19:58.460 "aliases": [ 00:19:58.460 "805cd121-1356-11ef-8e8f-9dd684e56d79" 00:19:58.460 ], 00:19:58.460 "product_name": "Malloc disk", 00:19:58.460 "block_size": 512, 00:19:58.460 "num_blocks": 65536, 00:19:58.460 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:19:58.460 "assigned_rate_limits": { 00:19:58.460 "rw_ios_per_sec": 0, 00:19:58.460 "rw_mbytes_per_sec": 0, 00:19:58.460 "r_mbytes_per_sec": 0, 00:19:58.460 "w_mbytes_per_sec": 0 00:19:58.460 }, 00:19:58.460 "claimed": true, 00:19:58.460 "claim_type": "exclusive_write", 00:19:58.460 "zoned": false, 00:19:58.460 "supported_io_types": { 00:19:58.460 "read": true, 00:19:58.460 "write": true, 00:19:58.460 "unmap": true, 00:19:58.460 "write_zeroes": true, 00:19:58.460 "flush": true, 00:19:58.460 "reset": true, 00:19:58.460 "compare": false, 00:19:58.460 "compare_and_write": false, 00:19:58.460 "abort": true, 00:19:58.460 "nvme_admin": false, 00:19:58.460 "nvme_io": false 00:19:58.460 }, 00:19:58.460 "memory_domains": [ 00:19:58.460 { 00:19:58.460 "dma_device_id": "system", 00:19:58.460 "dma_device_type": 1 00:19:58.460 }, 00:19:58.460 { 00:19:58.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.460 "dma_device_type": 2 00:19:58.460 } 00:19:58.460 ], 00:19:58.460 "driver_specific": {} 00:19:58.460 } 00:19:58.460 ] 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.460 07:32:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.460 07:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.719 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.719 "name": "Existed_Raid", 00:19:58.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.719 "strip_size_kb": 0, 00:19:58.719 "state": "configuring", 00:19:58.719 "raid_level": "raid1", 00:19:58.719 "superblock": false, 00:19:58.719 "num_base_bdevs": 2, 00:19:58.719 "num_base_bdevs_discovered": 1, 00:19:58.719 "num_base_bdevs_operational": 2, 00:19:58.719 "base_bdevs_list": [ 00:19:58.719 { 00:19:58.719 "name": "BaseBdev1", 00:19:58.719 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:19:58.719 "is_configured": true, 00:19:58.719 "data_offset": 0, 00:19:58.719 "data_size": 65536 00:19:58.719 }, 00:19:58.719 { 00:19:58.719 "name": "BaseBdev2", 00:19:58.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.719 "is_configured": false, 00:19:58.719 "data_offset": 0, 00:19:58.719 "data_size": 0 00:19:58.719 } 00:19:58.719 ] 00:19:58.719 }' 00:19:58.719 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.719 07:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.977 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:59.236 [2024-05-16 07:32:52.649451] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.236 [2024-05-16 07:32:52.649480] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca91500 name Existed_Raid, state configuring 00:19:59.236 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:19:59.493 [2024-05-16 07:32:52.949376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.493 [2024-05-16 07:32:52.950049] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.493 [2024-05-16 07:32:52.950091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.493 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.494 07:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.751 07:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.751 "name": "Existed_Raid", 00:19:59.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.751 "strip_size_kb": 0, 00:19:59.751 "state": "configuring", 00:19:59.751 "raid_level": "raid1", 00:19:59.751 "superblock": false, 00:19:59.751 "num_base_bdevs": 2, 00:19:59.752 "num_base_bdevs_discovered": 1, 00:19:59.752 "num_base_bdevs_operational": 2, 00:19:59.752 "base_bdevs_list": [ 00:19:59.752 { 00:19:59.752 "name": "BaseBdev1", 00:19:59.752 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:19:59.752 "is_configured": true, 00:19:59.752 "data_offset": 0, 00:19:59.752 "data_size": 65536 00:19:59.752 }, 00:19:59.752 { 00:19:59.752 "name": "BaseBdev2", 00:19:59.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.752 "is_configured": false, 00:19:59.752 "data_offset": 0, 00:19:59.752 "data_size": 0 00:19:59.752 } 00:19:59.752 ] 00:19:59.752 }' 00:19:59.752 07:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.752 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.317 07:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:00.318 [2024-05-16 07:32:53.881246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.318 [2024-05-16 07:32:53.881271] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ca91a00 00:20:00.318 [2024-05-16 07:32:53.881274] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:00.318 [2024-05-16 07:32:53.881294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82caf4ec0 00:20:00.318 [2024-05-16 07:32:53.881377] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ca91a00 00:20:00.318 [2024-05-16 07:32:53.881381] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ca91a00 00:20:00.318 [2024-05-16 07:32:53.881408] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.575 BaseBdev2 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:00.575 07:32:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:00.575 07:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:00.834 [ 00:20:00.834 { 00:20:00.834 "name": "BaseBdev2", 00:20:00.834 "aliases": [ 00:20:00.834 "81dfb406-1356-11ef-8e8f-9dd684e56d79" 00:20:00.834 ], 00:20:00.834 "product_name": "Malloc disk", 00:20:00.834 "block_size": 512, 00:20:00.834 "num_blocks": 65536, 00:20:00.834 "uuid": "81dfb406-1356-11ef-8e8f-9dd684e56d79", 00:20:00.834 "assigned_rate_limits": { 00:20:00.834 "rw_ios_per_sec": 0, 00:20:00.834 "rw_mbytes_per_sec": 0, 00:20:00.834 "r_mbytes_per_sec": 0, 00:20:00.834 "w_mbytes_per_sec": 0 00:20:00.834 }, 00:20:00.834 "claimed": true, 00:20:00.834 "claim_type": "exclusive_write", 00:20:00.834 "zoned": false, 00:20:00.834 "supported_io_types": { 00:20:00.834 "read": true, 00:20:00.834 "write": true, 00:20:00.834 "unmap": true, 00:20:00.834 "write_zeroes": true, 00:20:00.834 "flush": true, 00:20:00.834 "reset": true, 00:20:00.834 "compare": false, 00:20:00.834 "compare_and_write": false, 00:20:00.834 "abort": true, 00:20:00.834 "nvme_admin": false, 00:20:00.834 "nvme_io": false 00:20:00.834 }, 00:20:00.834 "memory_domains": [ 00:20:00.834 { 00:20:00.834 "dma_device_id": "system", 00:20:00.834 "dma_device_type": 1 00:20:00.834 }, 00:20:00.834 { 00:20:00.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.834 "dma_device_type": 2 00:20:00.834 } 00:20:00.834 ], 00:20:00.834 "driver_specific": {} 00:20:00.834 } 00:20:00.834 ] 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.834 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.400 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.400 "name": "Existed_Raid", 00:20:01.400 "uuid": "81dfb97e-1356-11ef-8e8f-9dd684e56d79", 00:20:01.400 "strip_size_kb": 0, 00:20:01.400 "state": "online", 00:20:01.400 "raid_level": "raid1", 00:20:01.400 "superblock": false, 00:20:01.400 "num_base_bdevs": 2, 00:20:01.400 "num_base_bdevs_discovered": 2, 00:20:01.400 "num_base_bdevs_operational": 2, 00:20:01.400 "base_bdevs_list": [ 00:20:01.400 { 00:20:01.400 "name": "BaseBdev1", 00:20:01.400 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:20:01.400 "is_configured": true, 00:20:01.400 "data_offset": 0, 00:20:01.400 "data_size": 65536 00:20:01.400 }, 00:20:01.400 { 00:20:01.400 "name": "BaseBdev2", 00:20:01.400 "uuid": "81dfb406-1356-11ef-8e8f-9dd684e56d79", 00:20:01.400 "is_configured": true, 00:20:01.400 "data_offset": 0, 00:20:01.400 "data_size": 65536 00:20:01.400 } 00:20:01.400 ] 00:20:01.400 }' 00:20:01.400 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.400 07:32:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:01.658 07:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:01.658 [2024-05-16 07:32:55.200841] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.658 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:01.658 "name": "Existed_Raid", 00:20:01.658 "aliases": [ 00:20:01.658 "81dfb97e-1356-11ef-8e8f-9dd684e56d79" 00:20:01.658 ], 00:20:01.658 "product_name": "Raid Volume", 00:20:01.658 "block_size": 512, 00:20:01.658 "num_blocks": 65536, 00:20:01.658 "uuid": "81dfb97e-1356-11ef-8e8f-9dd684e56d79", 00:20:01.658 "assigned_rate_limits": { 00:20:01.658 "rw_ios_per_sec": 0, 00:20:01.658 "rw_mbytes_per_sec": 0, 00:20:01.658 "r_mbytes_per_sec": 0, 00:20:01.658 "w_mbytes_per_sec": 0 00:20:01.658 }, 00:20:01.658 "claimed": false, 00:20:01.658 "zoned": false, 00:20:01.658 "supported_io_types": { 00:20:01.658 "read": true, 00:20:01.658 "write": true, 00:20:01.658 "unmap": false, 00:20:01.658 "write_zeroes": true, 00:20:01.658 "flush": false, 00:20:01.658 "reset": true, 00:20:01.658 "compare": false, 00:20:01.658 "compare_and_write": false, 00:20:01.658 "abort": false, 00:20:01.658 "nvme_admin": false, 00:20:01.658 "nvme_io": false 00:20:01.658 }, 00:20:01.658 "memory_domains": [ 00:20:01.658 { 
00:20:01.658 "dma_device_id": "system", 00:20:01.658 "dma_device_type": 1 00:20:01.658 }, 00:20:01.658 { 00:20:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.658 "dma_device_type": 2 00:20:01.658 }, 00:20:01.658 { 00:20:01.658 "dma_device_id": "system", 00:20:01.658 "dma_device_type": 1 00:20:01.658 }, 00:20:01.658 { 00:20:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.658 "dma_device_type": 2 00:20:01.658 } 00:20:01.658 ], 00:20:01.658 "driver_specific": { 00:20:01.658 "raid": { 00:20:01.658 "uuid": "81dfb97e-1356-11ef-8e8f-9dd684e56d79", 00:20:01.658 "strip_size_kb": 0, 00:20:01.658 "state": "online", 00:20:01.658 "raid_level": "raid1", 00:20:01.658 "superblock": false, 00:20:01.658 "num_base_bdevs": 2, 00:20:01.658 "num_base_bdevs_discovered": 2, 00:20:01.658 "num_base_bdevs_operational": 2, 00:20:01.658 "base_bdevs_list": [ 00:20:01.658 { 00:20:01.658 "name": "BaseBdev1", 00:20:01.658 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:20:01.658 "is_configured": true, 00:20:01.658 "data_offset": 0, 00:20:01.658 "data_size": 65536 00:20:01.658 }, 00:20:01.658 { 00:20:01.658 "name": "BaseBdev2", 00:20:01.658 "uuid": "81dfb406-1356-11ef-8e8f-9dd684e56d79", 00:20:01.658 "is_configured": true, 00:20:01.658 "data_offset": 0, 00:20:01.658 "data_size": 65536 00:20:01.658 } 00:20:01.658 ] 00:20:01.658 } 00:20:01.658 } 00:20:01.658 }' 00:20:01.658 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:01.916 BaseBdev2' 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:01.916 "name": "BaseBdev1", 00:20:01.916 "aliases": [ 00:20:01.916 "805cd121-1356-11ef-8e8f-9dd684e56d79" 00:20:01.916 ], 00:20:01.916 "product_name": "Malloc disk", 00:20:01.916 "block_size": 512, 00:20:01.916 "num_blocks": 65536, 00:20:01.916 "uuid": "805cd121-1356-11ef-8e8f-9dd684e56d79", 00:20:01.916 "assigned_rate_limits": { 00:20:01.916 "rw_ios_per_sec": 0, 00:20:01.916 "rw_mbytes_per_sec": 0, 00:20:01.916 "r_mbytes_per_sec": 0, 00:20:01.916 "w_mbytes_per_sec": 0 00:20:01.916 }, 00:20:01.916 "claimed": true, 00:20:01.916 "claim_type": "exclusive_write", 00:20:01.916 "zoned": false, 00:20:01.916 "supported_io_types": { 00:20:01.916 "read": true, 00:20:01.916 "write": true, 00:20:01.916 "unmap": true, 00:20:01.916 "write_zeroes": true, 00:20:01.916 "flush": true, 00:20:01.916 "reset": true, 00:20:01.916 "compare": false, 00:20:01.916 "compare_and_write": false, 00:20:01.916 "abort": true, 00:20:01.916 "nvme_admin": false, 00:20:01.916 "nvme_io": false 00:20:01.916 }, 00:20:01.916 "memory_domains": [ 00:20:01.916 { 00:20:01.916 "dma_device_id": "system", 00:20:01.916 "dma_device_type": 1 00:20:01.916 }, 00:20:01.916 { 00:20:01.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.916 "dma_device_type": 2 00:20:01.916 } 00:20:01.916 ], 00:20:01.916 "driver_specific": {} 00:20:01.916 }' 00:20:01.916 07:32:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:01.916 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:02.174 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:02.433 "name": "BaseBdev2", 00:20:02.433 "aliases": [ 00:20:02.433 "81dfb406-1356-11ef-8e8f-9dd684e56d79" 00:20:02.433 ], 00:20:02.433 "product_name": "Malloc disk", 00:20:02.433 "block_size": 512, 00:20:02.433 "num_blocks": 65536, 00:20:02.433 "uuid": "81dfb406-1356-11ef-8e8f-9dd684e56d79", 00:20:02.433 "assigned_rate_limits": { 00:20:02.433 "rw_ios_per_sec": 0, 00:20:02.433 "rw_mbytes_per_sec": 0, 00:20:02.433 "r_mbytes_per_sec": 0, 00:20:02.433 "w_mbytes_per_sec": 0 00:20:02.433 }, 00:20:02.433 "claimed": true, 00:20:02.433 "claim_type": "exclusive_write", 00:20:02.433 "zoned": false, 00:20:02.433 "supported_io_types": { 00:20:02.433 "read": true, 00:20:02.433 "write": true, 00:20:02.433 "unmap": true, 00:20:02.433 "write_zeroes": true, 00:20:02.433 "flush": true, 00:20:02.433 "reset": true, 00:20:02.433 "compare": false, 00:20:02.433 "compare_and_write": false, 00:20:02.433 "abort": true, 00:20:02.433 "nvme_admin": false, 00:20:02.433 "nvme_io": false 00:20:02.433 }, 00:20:02.433 "memory_domains": [ 00:20:02.433 { 00:20:02.433 "dma_device_id": "system", 00:20:02.433 "dma_device_type": 1 00:20:02.433 }, 00:20:02.433 { 00:20:02.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.433 "dma_device_type": 2 00:20:02.433 } 00:20:02.433 ], 00:20:02.433 "driver_specific": {} 00:20:02.433 }' 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:02.433 
07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:02.433 07:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:02.692 [2024-05-16 07:32:56.112612] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.692 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.951 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.951 "name": "Existed_Raid", 00:20:02.951 "uuid": "81dfb97e-1356-11ef-8e8f-9dd684e56d79", 00:20:02.951 "strip_size_kb": 0, 00:20:02.951 "state": "online", 00:20:02.951 "raid_level": "raid1", 00:20:02.951 "superblock": false, 00:20:02.951 "num_base_bdevs": 2, 00:20:02.951 "num_base_bdevs_discovered": 1, 00:20:02.951 "num_base_bdevs_operational": 1, 00:20:02.951 "base_bdevs_list": [ 00:20:02.951 { 00:20:02.951 "name": null, 00:20:02.951 
"uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.951 "is_configured": false, 00:20:02.951 "data_offset": 0, 00:20:02.951 "data_size": 65536 00:20:02.951 }, 00:20:02.951 { 00:20:02.951 "name": "BaseBdev2", 00:20:02.951 "uuid": "81dfb406-1356-11ef-8e8f-9dd684e56d79", 00:20:02.951 "is_configured": true, 00:20:02.951 "data_offset": 0, 00:20:02.951 "data_size": 65536 00:20:02.951 } 00:20:02.951 ] 00:20:02.951 }' 00:20:02.951 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.951 07:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.211 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:03.211 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:03.211 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.211 07:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:03.469 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:03.469 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:03.469 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:03.728 [2024-05-16 07:32:57.265096] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:03.728 [2024-05-16 07:32:57.265129] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.728 [2024-05-16 07:32:57.269897] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.728 [2024-05-16 07:32:57.269912] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.728 [2024-05-16 07:32:57.269916] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca91a00 name Existed_Raid, state offline 00:20:03.728 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:03.728 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:03.728 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:03.728 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 51101 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 51101 ']' 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 51101 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 
-- # '[' FreeBSD = Linux ']' 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 51101 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:04.052 killing process with pid 51101 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51101' 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 51101 00:20:04.052 [2024-05-16 07:32:57.611621] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.052 [2024-05-16 07:32:57.611645] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.052 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 51101 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:20:04.310 00:20:04.310 real 0m9.221s 00:20:04.310 user 0m16.227s 00:20:04.310 sys 0m1.493s 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.310 ************************************ 00:20:04.310 END TEST raid_state_function_test 00:20:04.310 ************************************ 00:20:04.310 07:32:57 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:20:04.310 07:32:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:04.310 07:32:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:04.310 07:32:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.310 ************************************ 00:20:04.310 START TEST raid_state_function_test_sb 00:20:04.310 ************************************ 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:04.310 07:32:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=51376 00:20:04.310 Process raid pid: 51376 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 51376' 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 51376 /var/tmp/spdk-raid.sock 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 51376 ']' 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:04.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.310 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:04.310 [2024-05-16 07:32:57.834162] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:04.310 [2024-05-16 07:32:57.834434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:04.877 EAL: TSC is not safe to use in SMP mode 00:20:04.877 EAL: TSC is not invariant 00:20:04.877 [2024-05-16 07:32:58.349160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.135 [2024-05-16 07:32:58.445158] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:20:05.135 [2024-05-16 07:32:58.447806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.135 [2024-05-16 07:32:58.448686] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.135 [2024-05-16 07:32:58.448701] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.391 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:05.391 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:05.391 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:05.650 [2024-05-16 07:32:59.097117] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:05.650 [2024-05-16 07:32:59.097171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:05.650 [2024-05-16 07:32:59.097175] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.650 [2024-05-16 07:32:59.097183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.650 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.908 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.908 "name": "Existed_Raid", 00:20:05.908 "uuid": "84fb98ea-1356-11ef-8e8f-9dd684e56d79", 00:20:05.908 "strip_size_kb": 0, 00:20:05.908 "state": "configuring", 00:20:05.908 "raid_level": "raid1", 00:20:05.908 "superblock": true, 00:20:05.908 "num_base_bdevs": 2, 00:20:05.908 "num_base_bdevs_discovered": 0, 00:20:05.909 "num_base_bdevs_operational": 2, 00:20:05.909 "base_bdevs_list": [ 00:20:05.909 { 00:20:05.909 "name": "BaseBdev1", 00:20:05.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.909 "is_configured": false, 00:20:05.909 "data_offset": 0, 00:20:05.909 "data_size": 0 00:20:05.909 }, 
00:20:05.909 { 00:20:05.909 "name": "BaseBdev2", 00:20:05.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.909 "is_configured": false, 00:20:05.909 "data_offset": 0, 00:20:05.909 "data_size": 0 00:20:05.909 } 00:20:05.909 ] 00:20:05.909 }' 00:20:05.909 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.909 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.167 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:06.425 [2024-05-16 07:32:59.892938] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:06.425 [2024-05-16 07:32:59.892966] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca66500 name Existed_Raid, state configuring 00:20:06.425 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:06.696 [2024-05-16 07:33:00.164884] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.696 [2024-05-16 07:33:00.164948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.696 [2024-05-16 07:33:00.164953] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:06.696 [2024-05-16 07:33:00.164961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:06.696 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:06.989 [2024-05-16 07:33:00.417745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.989 BaseBdev1 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:06.989 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.248 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:07.248 [ 00:20:07.248 { 00:20:07.248 "name": "BaseBdev1", 00:20:07.248 "aliases": [ 00:20:07.248 "85c4f9fd-1356-11ef-8e8f-9dd684e56d79" 00:20:07.248 ], 00:20:07.248 "product_name": "Malloc disk", 00:20:07.248 "block_size": 512, 00:20:07.248 "num_blocks": 65536, 00:20:07.248 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:07.248 "assigned_rate_limits": { 00:20:07.248 "rw_ios_per_sec": 0, 00:20:07.248 
"rw_mbytes_per_sec": 0, 00:20:07.248 "r_mbytes_per_sec": 0, 00:20:07.248 "w_mbytes_per_sec": 0 00:20:07.248 }, 00:20:07.248 "claimed": true, 00:20:07.248 "claim_type": "exclusive_write", 00:20:07.248 "zoned": false, 00:20:07.248 "supported_io_types": { 00:20:07.248 "read": true, 00:20:07.248 "write": true, 00:20:07.248 "unmap": true, 00:20:07.248 "write_zeroes": true, 00:20:07.248 "flush": true, 00:20:07.248 "reset": true, 00:20:07.248 "compare": false, 00:20:07.248 "compare_and_write": false, 00:20:07.248 "abort": true, 00:20:07.248 "nvme_admin": false, 00:20:07.248 "nvme_io": false 00:20:07.248 }, 00:20:07.248 "memory_domains": [ 00:20:07.248 { 00:20:07.248 "dma_device_id": "system", 00:20:07.248 "dma_device_type": 1 00:20:07.248 }, 00:20:07.248 { 00:20:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.248 "dma_device_type": 2 00:20:07.248 } 00:20:07.248 ], 00:20:07.248 "driver_specific": {} 00:20:07.248 } 00:20:07.248 ] 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.506 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.507 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.507 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.765 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.765 "name": "Existed_Raid", 00:20:07.765 "uuid": "859e8690-1356-11ef-8e8f-9dd684e56d79", 00:20:07.765 "strip_size_kb": 0, 00:20:07.765 "state": "configuring", 00:20:07.765 "raid_level": "raid1", 00:20:07.765 "superblock": true, 00:20:07.765 "num_base_bdevs": 2, 00:20:07.765 "num_base_bdevs_discovered": 1, 00:20:07.765 "num_base_bdevs_operational": 2, 00:20:07.765 "base_bdevs_list": [ 00:20:07.765 { 00:20:07.765 "name": "BaseBdev1", 00:20:07.765 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:07.765 "is_configured": true, 00:20:07.765 "data_offset": 2048, 00:20:07.765 "data_size": 63488 00:20:07.765 }, 00:20:07.765 { 00:20:07.765 "name": "BaseBdev2", 00:20:07.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.765 "is_configured": false, 00:20:07.765 "data_offset": 0, 00:20:07.765 "data_size": 0 00:20:07.765 } 00:20:07.765 ] 00:20:07.765 }' 
00:20:07.765 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.765 07:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.023 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:08.280 [2024-05-16 07:33:01.648601] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:08.280 [2024-05-16 07:33:01.648634] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca66500 name Existed_Raid, state configuring 00:20:08.280 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:20:08.538 [2024-05-16 07:33:01.904545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.538 [2024-05-16 07:33:01.905192] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.538 [2024-05-16 07:33:01.905229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.538 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.794 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.794 "name": "Existed_Raid", 00:20:08.794 "uuid": "86a7fa00-1356-11ef-8e8f-9dd684e56d79", 00:20:08.794 "strip_size_kb": 0, 00:20:08.794 "state": "configuring", 00:20:08.794 "raid_level": "raid1", 00:20:08.794 "superblock": true, 00:20:08.794 "num_base_bdevs": 2, 00:20:08.794 "num_base_bdevs_discovered": 1, 00:20:08.794 "num_base_bdevs_operational": 2, 00:20:08.794 "base_bdevs_list": [ 00:20:08.794 { 
00:20:08.794 "name": "BaseBdev1", 00:20:08.794 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:08.794 "is_configured": true, 00:20:08.794 "data_offset": 2048, 00:20:08.794 "data_size": 63488 00:20:08.794 }, 00:20:08.794 { 00:20:08.794 "name": "BaseBdev2", 00:20:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.794 "is_configured": false, 00:20:08.794 "data_offset": 0, 00:20:08.794 "data_size": 0 00:20:08.794 } 00:20:08.794 ] 00:20:08.794 }' 00:20:08.794 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.794 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:09.050 [2024-05-16 07:33:02.544528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:09.050 [2024-05-16 07:33:02.544576] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ca66a00 00:20:09.050 [2024-05-16 07:33:02.544581] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:09.050 [2024-05-16 07:33:02.544597] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cac9ec0 00:20:09.050 [2024-05-16 07:33:02.544628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ca66a00 00:20:09.050 [2024-05-16 07:33:02.544631] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ca66a00 00:20:09.050 [2024-05-16 07:33:02.544645] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.050 BaseBdev2 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:09.050 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:09.307 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:09.565 [ 00:20:09.565 { 00:20:09.565 "name": "BaseBdev2", 00:20:09.565 "aliases": [ 00:20:09.565 "87099daf-1356-11ef-8e8f-9dd684e56d79" 00:20:09.565 ], 00:20:09.565 "product_name": "Malloc disk", 00:20:09.565 "block_size": 512, 00:20:09.565 "num_blocks": 65536, 00:20:09.565 "uuid": "87099daf-1356-11ef-8e8f-9dd684e56d79", 00:20:09.565 "assigned_rate_limits": { 00:20:09.565 "rw_ios_per_sec": 0, 00:20:09.565 "rw_mbytes_per_sec": 0, 00:20:09.565 "r_mbytes_per_sec": 0, 00:20:09.565 "w_mbytes_per_sec": 0 00:20:09.565 }, 00:20:09.565 "claimed": true, 00:20:09.565 "claim_type": "exclusive_write", 00:20:09.565 "zoned": false, 00:20:09.565 "supported_io_types": { 
00:20:09.565 "read": true, 00:20:09.565 "write": true, 00:20:09.565 "unmap": true, 00:20:09.565 "write_zeroes": true, 00:20:09.565 "flush": true, 00:20:09.565 "reset": true, 00:20:09.565 "compare": false, 00:20:09.565 "compare_and_write": false, 00:20:09.565 "abort": true, 00:20:09.565 "nvme_admin": false, 00:20:09.565 "nvme_io": false 00:20:09.565 }, 00:20:09.565 "memory_domains": [ 00:20:09.565 { 00:20:09.565 "dma_device_id": "system", 00:20:09.565 "dma_device_type": 1 00:20:09.565 }, 00:20:09.565 { 00:20:09.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.565 "dma_device_type": 2 00:20:09.565 } 00:20:09.565 ], 00:20:09.565 "driver_specific": {} 00:20:09.565 } 00:20:09.565 ] 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.565 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.824 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.824 "name": "Existed_Raid", 00:20:09.824 "uuid": "86a7fa00-1356-11ef-8e8f-9dd684e56d79", 00:20:09.824 "strip_size_kb": 0, 00:20:09.824 "state": "online", 00:20:09.824 "raid_level": "raid1", 00:20:09.824 "superblock": true, 00:20:09.824 "num_base_bdevs": 2, 00:20:09.824 "num_base_bdevs_discovered": 2, 00:20:09.824 "num_base_bdevs_operational": 2, 00:20:09.824 "base_bdevs_list": [ 00:20:09.824 { 00:20:09.824 "name": "BaseBdev1", 00:20:09.824 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:09.824 "is_configured": true, 00:20:09.824 "data_offset": 2048, 00:20:09.824 "data_size": 63488 00:20:09.824 }, 00:20:09.824 { 00:20:09.824 "name": "BaseBdev2", 00:20:09.824 "uuid": "87099daf-1356-11ef-8e8f-9dd684e56d79", 00:20:09.824 "is_configured": true, 00:20:09.824 "data_offset": 2048, 00:20:09.824 "data_size": 63488 00:20:09.824 } 00:20:09.824 ] 00:20:09.824 }' 00:20:09.824 07:33:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.824 07:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:10.082 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:10.341 [2024-05-16 07:33:03.872220] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.341 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:10.341 "name": "Existed_Raid", 00:20:10.341 "aliases": [ 00:20:10.341 "86a7fa00-1356-11ef-8e8f-9dd684e56d79" 00:20:10.341 ], 00:20:10.341 "product_name": "Raid Volume", 00:20:10.341 "block_size": 512, 00:20:10.341 "num_blocks": 63488, 00:20:10.341 "uuid": "86a7fa00-1356-11ef-8e8f-9dd684e56d79", 00:20:10.341 "assigned_rate_limits": { 00:20:10.341 "rw_ios_per_sec": 0, 00:20:10.341 "rw_mbytes_per_sec": 0, 00:20:10.341 "r_mbytes_per_sec": 0, 00:20:10.341 "w_mbytes_per_sec": 0 00:20:10.341 }, 00:20:10.341 "claimed": false, 00:20:10.341 "zoned": false, 00:20:10.341 "supported_io_types": { 00:20:10.341 "read": true, 00:20:10.341 "write": true, 00:20:10.341 "unmap": false, 00:20:10.341 "write_zeroes": true, 00:20:10.341 "flush": false, 00:20:10.341 "reset": true, 00:20:10.341 "compare": false, 00:20:10.341 "compare_and_write": false, 00:20:10.341 "abort": false, 00:20:10.341 "nvme_admin": false, 00:20:10.341 "nvme_io": false 00:20:10.341 }, 00:20:10.341 "memory_domains": [ 00:20:10.341 { 00:20:10.341 "dma_device_id": "system", 00:20:10.341 "dma_device_type": 1 00:20:10.341 }, 00:20:10.341 { 00:20:10.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.341 "dma_device_type": 2 00:20:10.341 }, 00:20:10.341 { 00:20:10.341 "dma_device_id": "system", 00:20:10.341 "dma_device_type": 1 00:20:10.341 }, 00:20:10.341 { 00:20:10.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.341 "dma_device_type": 2 00:20:10.341 } 00:20:10.341 ], 00:20:10.341 "driver_specific": { 00:20:10.341 "raid": { 00:20:10.341 "uuid": "86a7fa00-1356-11ef-8e8f-9dd684e56d79", 00:20:10.341 "strip_size_kb": 0, 00:20:10.341 "state": "online", 00:20:10.341 "raid_level": "raid1", 00:20:10.341 "superblock": true, 00:20:10.341 "num_base_bdevs": 2, 00:20:10.341 "num_base_bdevs_discovered": 2, 00:20:10.341 "num_base_bdevs_operational": 2, 00:20:10.341 "base_bdevs_list": [ 00:20:10.341 { 00:20:10.341 "name": "BaseBdev1", 00:20:10.341 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:10.341 "is_configured": true, 00:20:10.341 "data_offset": 2048, 00:20:10.341 "data_size": 63488 00:20:10.341 }, 00:20:10.341 { 00:20:10.341 "name": "BaseBdev2", 00:20:10.341 
"uuid": "87099daf-1356-11ef-8e8f-9dd684e56d79", 00:20:10.341 "is_configured": true, 00:20:10.341 "data_offset": 2048, 00:20:10.341 "data_size": 63488 00:20:10.341 } 00:20:10.341 ] 00:20:10.341 } 00:20:10.341 } 00:20:10.341 }' 00:20:10.341 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.342 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:10.342 BaseBdev2' 00:20:10.342 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:10.342 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:10.342 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:10.600 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:10.600 "name": "BaseBdev1", 00:20:10.600 "aliases": [ 00:20:10.600 "85c4f9fd-1356-11ef-8e8f-9dd684e56d79" 00:20:10.600 ], 00:20:10.600 "product_name": "Malloc disk", 00:20:10.600 "block_size": 512, 00:20:10.600 "num_blocks": 65536, 00:20:10.600 "uuid": "85c4f9fd-1356-11ef-8e8f-9dd684e56d79", 00:20:10.600 "assigned_rate_limits": { 00:20:10.600 "rw_ios_per_sec": 0, 00:20:10.600 "rw_mbytes_per_sec": 0, 00:20:10.600 "r_mbytes_per_sec": 0, 00:20:10.600 "w_mbytes_per_sec": 0 00:20:10.600 }, 00:20:10.600 "claimed": true, 00:20:10.600 "claim_type": "exclusive_write", 00:20:10.600 "zoned": false, 00:20:10.600 "supported_io_types": { 00:20:10.600 "read": true, 00:20:10.600 "write": true, 00:20:10.600 "unmap": true, 00:20:10.600 "write_zeroes": true, 00:20:10.600 "flush": true, 00:20:10.600 "reset": true, 00:20:10.600 "compare": false, 00:20:10.600 "compare_and_write": false, 00:20:10.600 "abort": true, 00:20:10.600 "nvme_admin": false, 00:20:10.600 "nvme_io": false 00:20:10.600 }, 00:20:10.600 "memory_domains": [ 00:20:10.600 { 00:20:10.600 "dma_device_id": "system", 00:20:10.600 "dma_device_type": 1 00:20:10.600 }, 00:20:10.600 { 00:20:10.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.600 "dma_device_type": 2 00:20:10.600 } 00:20:10.600 ], 00:20:10.600 "driver_specific": {} 00:20:10.600 }' 00:20:10.600 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:10.600 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:10.600 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:10.600 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:10.859 
07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:10.859 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:11.118 "name": "BaseBdev2", 00:20:11.118 "aliases": [ 00:20:11.118 "87099daf-1356-11ef-8e8f-9dd684e56d79" 00:20:11.118 ], 00:20:11.118 "product_name": "Malloc disk", 00:20:11.118 "block_size": 512, 00:20:11.118 "num_blocks": 65536, 00:20:11.118 "uuid": "87099daf-1356-11ef-8e8f-9dd684e56d79", 00:20:11.118 "assigned_rate_limits": { 00:20:11.118 "rw_ios_per_sec": 0, 00:20:11.118 "rw_mbytes_per_sec": 0, 00:20:11.118 "r_mbytes_per_sec": 0, 00:20:11.118 "w_mbytes_per_sec": 0 00:20:11.118 }, 00:20:11.118 "claimed": true, 00:20:11.118 "claim_type": "exclusive_write", 00:20:11.118 "zoned": false, 00:20:11.118 "supported_io_types": { 00:20:11.118 "read": true, 00:20:11.118 "write": true, 00:20:11.118 "unmap": true, 00:20:11.118 "write_zeroes": true, 00:20:11.118 "flush": true, 00:20:11.118 "reset": true, 00:20:11.118 "compare": false, 00:20:11.118 "compare_and_write": false, 00:20:11.118 "abort": true, 00:20:11.118 "nvme_admin": false, 00:20:11.118 "nvme_io": false 00:20:11.118 }, 00:20:11.118 "memory_domains": [ 00:20:11.118 { 00:20:11.118 "dma_device_id": "system", 00:20:11.118 "dma_device_type": 1 00:20:11.118 }, 00:20:11.118 { 00:20:11.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.118 "dma_device_type": 2 00:20:11.118 } 00:20:11.118 ], 00:20:11.118 "driver_specific": {} 00:20:11.118 }' 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:11.118 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:11.376 [2024-05-16 07:33:04.784024] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.376 07:33:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.376 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.634 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.634 "name": "Existed_Raid", 00:20:11.634 "uuid": "86a7fa00-1356-11ef-8e8f-9dd684e56d79", 00:20:11.634 "strip_size_kb": 0, 00:20:11.634 "state": "online", 00:20:11.634 "raid_level": "raid1", 00:20:11.634 "superblock": true, 00:20:11.634 "num_base_bdevs": 2, 00:20:11.634 "num_base_bdevs_discovered": 1, 00:20:11.634 "num_base_bdevs_operational": 1, 00:20:11.634 "base_bdevs_list": [ 00:20:11.634 { 00:20:11.634 "name": null, 00:20:11.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.634 "is_configured": false, 00:20:11.634 "data_offset": 2048, 00:20:11.634 "data_size": 63488 00:20:11.634 }, 00:20:11.634 { 00:20:11.634 "name": "BaseBdev2", 00:20:11.634 "uuid": "87099daf-1356-11ef-8e8f-9dd684e56d79", 00:20:11.634 "is_configured": true, 00:20:11.634 "data_offset": 2048, 00:20:11.634 "data_size": 63488 00:20:11.634 } 00:20:11.634 ] 00:20:11.634 }' 00:20:11.634 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.634 07:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.893 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:11.893 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:11.893 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.893 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:12.200 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:12.200 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.200 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:12.457 [2024-05-16 07:33:05.904583] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.457 [2024-05-16 07:33:05.904621] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.457 [2024-05-16 07:33:05.909318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.457 [2024-05-16 07:33:05.909331] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.457 [2024-05-16 07:33:05.909335] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ca66a00 name Existed_Raid, state offline 00:20:12.457 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:12.457 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:12.457 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.457 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 51376 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 51376 ']' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 51376 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 51376 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:12.714 killing process with pid 51376 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51376' 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 51376 00:20:12.714 [2024-05-16 07:33:06.123622] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:20:12.714 [2024-05-16 07:33:06.123663] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.714 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 51376 00:20:12.972 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:20:12.972 00:20:12.972 real 0m8.472s 00:20:12.972 user 0m14.847s 00:20:12.972 sys 0m1.405s 00:20:12.972 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:12.972 07:33:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.972 ************************************ 00:20:12.972 END TEST raid_state_function_test_sb 00:20:12.972 ************************************ 00:20:12.972 07:33:06 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:20:12.972 07:33:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:12.972 07:33:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:12.972 07:33:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.972 ************************************ 00:20:12.972 START TEST raid_superblock_test 00:20:12.972 ************************************ 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=51646 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 51646 /var/tmp/spdk-raid.sock 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 51646 ']' 00:20:12.972 07:33:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:12.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:12.972 07:33:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.972 [2024-05-16 07:33:06.343374] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:12.972 [2024-05-16 07:33:06.343592] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:13.538 EAL: TSC is not safe to use in SMP mode 00:20:13.538 EAL: TSC is not invariant 00:20:13.538 [2024-05-16 07:33:06.821707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.538 [2024-05-16 07:33:06.906710] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:13.538 [2024-05-16 07:33:06.908879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.538 [2024-05-16 07:33:06.909576] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.538 [2024-05-16 07:33:06.909588] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:14.103 malloc1 00:20:14.103 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:14.362 [2024-05-16 07:33:07.852047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:14.362 [2024-05-16 07:33:07.852116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.362 [2024-05-16 07:33:07.852689] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b740780 00:20:14.362 [2024-05-16 07:33:07.852715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.362 [2024-05-16 07:33:07.853532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.362 [2024-05-16 07:33:07.853588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:14.362 pt1 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:14.362 07:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:14.620 malloc2 00:20:14.621 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:14.889 [2024-05-16 07:33:08.343994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:14.889 [2024-05-16 07:33:08.344056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.889 [2024-05-16 07:33:08.344083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b740c80 00:20:14.889 [2024-05-16 07:33:08.344091] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.889 [2024-05-16 07:33:08.344603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.889 [2024-05-16 07:33:08.344632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:14.889 pt2 00:20:14.889 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:14.889 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.889 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:20:15.178 [2024-05-16 07:33:08.587991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:15.178 [2024-05-16 07:33:08.588470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:15.178 [2024-05-16 07:33:08.588530] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b740f00 00:20:15.178 [2024-05-16 07:33:08.588535] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:15.178 [2024-05-16 07:33:08.588570] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82b7a3e20 00:20:15.178 [2024-05-16 07:33:08.588628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b740f00 00:20:15.178 [2024-05-16 07:33:08.588631] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b740f00 00:20:15.178 [2024-05-16 07:33:08.588654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.178 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.437 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.437 "name": "raid_bdev1", 00:20:15.437 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:15.437 "strip_size_kb": 0, 00:20:15.437 "state": "online", 00:20:15.437 "raid_level": "raid1", 00:20:15.437 "superblock": true, 00:20:15.437 "num_base_bdevs": 2, 00:20:15.437 "num_base_bdevs_discovered": 2, 00:20:15.437 "num_base_bdevs_operational": 2, 00:20:15.437 "base_bdevs_list": [ 00:20:15.437 { 00:20:15.437 "name": "pt1", 00:20:15.437 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:15.437 "is_configured": true, 00:20:15.437 "data_offset": 2048, 00:20:15.437 "data_size": 63488 00:20:15.437 }, 00:20:15.437 { 00:20:15.437 "name": "pt2", 00:20:15.437 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:15.437 "is_configured": true, 00:20:15.437 "data_offset": 2048, 00:20:15.437 "data_size": 63488 00:20:15.437 } 00:20:15.437 ] 00:20:15.437 }' 00:20:15.438 07:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.438 07:33:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:15.696 07:33:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:15.696 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:16.262 [2024-05-16 07:33:09.523853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:16.262 "name": "raid_bdev1", 00:20:16.262 "aliases": [ 00:20:16.262 "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79" 00:20:16.262 ], 00:20:16.262 "product_name": "Raid Volume", 00:20:16.262 "block_size": 512, 00:20:16.262 "num_blocks": 63488, 00:20:16.262 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:16.262 "assigned_rate_limits": { 00:20:16.262 "rw_ios_per_sec": 0, 00:20:16.262 "rw_mbytes_per_sec": 0, 00:20:16.262 "r_mbytes_per_sec": 0, 00:20:16.262 "w_mbytes_per_sec": 0 00:20:16.262 }, 00:20:16.262 "claimed": false, 00:20:16.262 "zoned": false, 00:20:16.262 "supported_io_types": { 00:20:16.262 "read": true, 00:20:16.262 "write": true, 00:20:16.262 "unmap": false, 00:20:16.262 "write_zeroes": true, 00:20:16.262 "flush": false, 00:20:16.262 "reset": true, 00:20:16.262 "compare": false, 00:20:16.262 "compare_and_write": false, 00:20:16.262 "abort": false, 00:20:16.262 "nvme_admin": false, 00:20:16.262 "nvme_io": false 00:20:16.262 }, 00:20:16.262 "memory_domains": [ 00:20:16.262 { 00:20:16.262 "dma_device_id": "system", 00:20:16.262 "dma_device_type": 1 00:20:16.262 }, 00:20:16.262 { 00:20:16.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.262 "dma_device_type": 2 00:20:16.262 }, 00:20:16.262 { 00:20:16.262 "dma_device_id": "system", 00:20:16.262 "dma_device_type": 1 00:20:16.262 }, 00:20:16.262 { 00:20:16.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.262 "dma_device_type": 2 00:20:16.262 } 00:20:16.262 ], 00:20:16.262 "driver_specific": { 00:20:16.262 "raid": { 00:20:16.262 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:16.262 "strip_size_kb": 0, 00:20:16.262 "state": "online", 00:20:16.262 "raid_level": "raid1", 00:20:16.262 "superblock": true, 00:20:16.262 "num_base_bdevs": 2, 00:20:16.262 "num_base_bdevs_discovered": 2, 00:20:16.262 "num_base_bdevs_operational": 2, 00:20:16.262 "base_bdevs_list": [ 00:20:16.262 { 00:20:16.262 "name": "pt1", 00:20:16.262 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:16.262 "is_configured": true, 00:20:16.262 "data_offset": 2048, 00:20:16.262 "data_size": 63488 00:20:16.262 }, 00:20:16.262 { 00:20:16.262 "name": "pt2", 00:20:16.262 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:16.262 "is_configured": true, 00:20:16.262 "data_offset": 2048, 00:20:16.262 "data_size": 63488 00:20:16.262 } 00:20:16.262 ] 00:20:16.262 } 00:20:16.262 } 00:20:16.262 }' 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:20:16.262 pt2' 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:16.262 07:33:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:16.262 "name": "pt1", 00:20:16.262 "aliases": [ 00:20:16.262 "9d5baffc-2038-185b-b11c-6160a683177a" 00:20:16.262 ], 00:20:16.262 "product_name": "passthru", 00:20:16.262 "block_size": 512, 00:20:16.262 "num_blocks": 65536, 00:20:16.262 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:16.262 "assigned_rate_limits": { 00:20:16.262 "rw_ios_per_sec": 0, 00:20:16.262 "rw_mbytes_per_sec": 0, 00:20:16.262 "r_mbytes_per_sec": 0, 00:20:16.262 "w_mbytes_per_sec": 0 00:20:16.262 }, 00:20:16.262 "claimed": true, 00:20:16.262 "claim_type": "exclusive_write", 00:20:16.262 "zoned": false, 00:20:16.262 "supported_io_types": { 00:20:16.262 "read": true, 00:20:16.262 "write": true, 00:20:16.262 "unmap": true, 00:20:16.262 "write_zeroes": true, 00:20:16.262 "flush": true, 00:20:16.262 "reset": true, 00:20:16.262 "compare": false, 00:20:16.262 "compare_and_write": false, 00:20:16.262 "abort": true, 00:20:16.262 "nvme_admin": false, 00:20:16.262 "nvme_io": false 00:20:16.262 }, 00:20:16.262 "memory_domains": [ 00:20:16.262 { 00:20:16.262 "dma_device_id": "system", 00:20:16.262 "dma_device_type": 1 00:20:16.262 }, 00:20:16.262 { 00:20:16.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.262 "dma_device_type": 2 00:20:16.262 } 00:20:16.262 ], 00:20:16.262 "driver_specific": { 00:20:16.262 "passthru": { 00:20:16.262 "name": "pt1", 00:20:16.262 "base_bdev_name": "malloc1" 00:20:16.262 } 00:20:16.262 } 00:20:16.262 }' 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.262 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:16.520 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:16.520 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:16.520 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:16.520 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:16.520 07:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:16.520 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:16.520 "name": "pt2", 00:20:16.520 "aliases": [ 00:20:16.520 "ae825aa4-1f9f-f752-8be3-124e442cf035" 00:20:16.520 ], 00:20:16.520 "product_name": "passthru", 00:20:16.520 "block_size": 512, 00:20:16.520 "num_blocks": 65536, 00:20:16.520 "uuid": 
"ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:16.520 "assigned_rate_limits": { 00:20:16.520 "rw_ios_per_sec": 0, 00:20:16.520 "rw_mbytes_per_sec": 0, 00:20:16.520 "r_mbytes_per_sec": 0, 00:20:16.520 "w_mbytes_per_sec": 0 00:20:16.520 }, 00:20:16.520 "claimed": true, 00:20:16.520 "claim_type": "exclusive_write", 00:20:16.520 "zoned": false, 00:20:16.520 "supported_io_types": { 00:20:16.520 "read": true, 00:20:16.520 "write": true, 00:20:16.520 "unmap": true, 00:20:16.520 "write_zeroes": true, 00:20:16.520 "flush": true, 00:20:16.520 "reset": true, 00:20:16.520 "compare": false, 00:20:16.520 "compare_and_write": false, 00:20:16.520 "abort": true, 00:20:16.520 "nvme_admin": false, 00:20:16.520 "nvme_io": false 00:20:16.520 }, 00:20:16.520 "memory_domains": [ 00:20:16.520 { 00:20:16.520 "dma_device_id": "system", 00:20:16.520 "dma_device_type": 1 00:20:16.520 }, 00:20:16.520 { 00:20:16.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.520 "dma_device_type": 2 00:20:16.520 } 00:20:16.520 ], 00:20:16.520 "driver_specific": { 00:20:16.520 "passthru": { 00:20:16.520 "name": "pt2", 00:20:16.520 "base_bdev_name": "malloc2" 00:20:16.520 } 00:20:16.520 } 00:20:16.520 }' 00:20:16.520 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:16.778 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:17.036 [2024-05-16 07:33:10.403706] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.036 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 00:20:17.036 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 ']' 00:20:17.036 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:17.294 [2024-05-16 07:33:10.695619] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.294 [2024-05-16 07:33:10.695647] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.294 [2024-05-16 07:33:10.695668] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.294 [2024-05-16 07:33:10.695682] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.294 [2024-05-16 07:33:10.695686] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b740f00 name raid_bdev1, state offline 00:20:17.294 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.294 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:17.551 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:17.551 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:17.551 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.551 07:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:17.809 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:17.809 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:18.110 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:18.110 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.383 07:33:11 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:20:18.641 [2024-05-16 07:33:12.027447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:18.641 [2024-05-16 07:33:12.027901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:18.641 [2024-05-16 07:33:12.027918] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:18.641 [2024-05-16 07:33:12.027956] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:18.641 [2024-05-16 07:33:12.027966] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.641 [2024-05-16 07:33:12.027970] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b740c80 name raid_bdev1, state configuring 00:20:18.641 request: 00:20:18.641 { 00:20:18.641 "name": "raid_bdev1", 00:20:18.641 "raid_level": "raid1", 00:20:18.641 "base_bdevs": [ 00:20:18.641 "malloc1", 00:20:18.641 "malloc2" 00:20:18.641 ], 00:20:18.641 "superblock": false, 00:20:18.641 "method": "bdev_raid_create", 00:20:18.641 "req_id": 1 00:20:18.641 } 00:20:18.641 Got JSON-RPC error response 00:20:18.641 response: 00:20:18.641 { 00:20:18.641 "code": -17, 00:20:18.641 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:18.641 } 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:18.641 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.900 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:18.900 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:18.900 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:19.157 [2024-05-16 07:33:12.531386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:19.157 [2024-05-16 07:33:12.531435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.157 [2024-05-16 07:33:12.531461] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b740780 00:20:19.157 [2024-05-16 07:33:12.531468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.157 [2024-05-16 07:33:12.531939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.157 [2024-05-16 07:33:12.531971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:19.157 [2024-05-16 07:33:12.531989] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:19.157 [2024-05-16 07:33:12.531999] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:19.157 pt1 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.157 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.415 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.415 "name": "raid_bdev1", 00:20:19.415 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:19.415 "strip_size_kb": 0, 00:20:19.415 "state": "configuring", 00:20:19.415 "raid_level": "raid1", 00:20:19.415 "superblock": true, 00:20:19.415 "num_base_bdevs": 2, 00:20:19.415 "num_base_bdevs_discovered": 1, 00:20:19.415 "num_base_bdevs_operational": 2, 00:20:19.415 "base_bdevs_list": [ 00:20:19.415 { 00:20:19.415 "name": "pt1", 00:20:19.415 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:19.415 "is_configured": true, 00:20:19.415 "data_offset": 2048, 00:20:19.415 "data_size": 63488 00:20:19.415 }, 00:20:19.415 { 00:20:19.415 "name": null, 00:20:19.415 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:19.415 "is_configured": false, 00:20:19.415 "data_offset": 2048, 00:20:19.415 "data_size": 63488 00:20:19.415 } 00:20:19.415 ] 00:20:19.415 }' 00:20:19.415 07:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.415 07:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.673 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:19.673 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:19.673 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:19.673 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:19.932 [2024-05-16 07:33:13.455274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:19.932 [2024-05-16 07:33:13.455322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.932 [2024-05-16 07:33:13.455347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82b740f00 00:20:19.932 [2024-05-16 07:33:13.455355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.932 [2024-05-16 07:33:13.455435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.932 [2024-05-16 07:33:13.455444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:19.932 [2024-05-16 07:33:13.455460] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:19.932 [2024-05-16 07:33:13.455467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:19.932 [2024-05-16 07:33:13.455487] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b741180 00:20:19.932 [2024-05-16 07:33:13.455490] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:19.932 [2024-05-16 07:33:13.455508] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7a3e20 00:20:19.932 [2024-05-16 07:33:13.455547] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b741180 00:20:19.932 [2024-05-16 07:33:13.455550] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b741180 00:20:19.932 [2024-05-16 07:33:13.455568] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.932 pt2 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.932 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.498 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.498 "name": "raid_bdev1", 00:20:20.498 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:20.498 "strip_size_kb": 0, 00:20:20.498 "state": "online", 00:20:20.498 "raid_level": "raid1", 00:20:20.498 "superblock": true, 00:20:20.498 "num_base_bdevs": 2, 00:20:20.498 "num_base_bdevs_discovered": 2, 00:20:20.498 "num_base_bdevs_operational": 2, 00:20:20.498 "base_bdevs_list": [ 00:20:20.498 { 00:20:20.498 "name": 
"pt1", 00:20:20.498 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:20.498 "is_configured": true, 00:20:20.498 "data_offset": 2048, 00:20:20.498 "data_size": 63488 00:20:20.498 }, 00:20:20.498 { 00:20:20.498 "name": "pt2", 00:20:20.498 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:20.498 "is_configured": true, 00:20:20.498 "data_offset": 2048, 00:20:20.498 "data_size": 63488 00:20:20.498 } 00:20:20.498 ] 00:20:20.498 }' 00:20:20.498 07:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.498 07:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.756 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:20.756 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:20:20.756 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:20.756 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:20.756 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:20.757 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:20.757 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:20.757 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:21.015 [2024-05-16 07:33:14.423161] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.015 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:21.015 "name": "raid_bdev1", 00:20:21.015 "aliases": [ 00:20:21.015 "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79" 00:20:21.015 ], 00:20:21.015 "product_name": "Raid Volume", 00:20:21.015 "block_size": 512, 00:20:21.015 "num_blocks": 63488, 00:20:21.015 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:21.015 "assigned_rate_limits": { 00:20:21.015 "rw_ios_per_sec": 0, 00:20:21.015 "rw_mbytes_per_sec": 0, 00:20:21.015 "r_mbytes_per_sec": 0, 00:20:21.015 "w_mbytes_per_sec": 0 00:20:21.015 }, 00:20:21.015 "claimed": false, 00:20:21.015 "zoned": false, 00:20:21.015 "supported_io_types": { 00:20:21.015 "read": true, 00:20:21.015 "write": true, 00:20:21.015 "unmap": false, 00:20:21.015 "write_zeroes": true, 00:20:21.015 "flush": false, 00:20:21.015 "reset": true, 00:20:21.015 "compare": false, 00:20:21.015 "compare_and_write": false, 00:20:21.015 "abort": false, 00:20:21.015 "nvme_admin": false, 00:20:21.015 "nvme_io": false 00:20:21.015 }, 00:20:21.015 "memory_domains": [ 00:20:21.015 { 00:20:21.015 "dma_device_id": "system", 00:20:21.015 "dma_device_type": 1 00:20:21.015 }, 00:20:21.015 { 00:20:21.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.015 "dma_device_type": 2 00:20:21.015 }, 00:20:21.015 { 00:20:21.015 "dma_device_id": "system", 00:20:21.015 "dma_device_type": 1 00:20:21.015 }, 00:20:21.015 { 00:20:21.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.015 "dma_device_type": 2 00:20:21.015 } 00:20:21.015 ], 00:20:21.015 "driver_specific": { 00:20:21.015 "raid": { 00:20:21.015 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:21.015 "strip_size_kb": 0, 00:20:21.015 "state": "online", 00:20:21.015 "raid_level": "raid1", 00:20:21.015 "superblock": true, 00:20:21.015 "num_base_bdevs": 2, 00:20:21.015 "num_base_bdevs_discovered": 
2, 00:20:21.015 "num_base_bdevs_operational": 2, 00:20:21.015 "base_bdevs_list": [ 00:20:21.015 { 00:20:21.015 "name": "pt1", 00:20:21.015 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:21.015 "is_configured": true, 00:20:21.015 "data_offset": 2048, 00:20:21.015 "data_size": 63488 00:20:21.015 }, 00:20:21.015 { 00:20:21.015 "name": "pt2", 00:20:21.015 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:21.015 "is_configured": true, 00:20:21.015 "data_offset": 2048, 00:20:21.015 "data_size": 63488 00:20:21.015 } 00:20:21.015 ] 00:20:21.015 } 00:20:21.015 } 00:20:21.015 }' 00:20:21.015 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:21.015 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:20:21.015 pt2' 00:20:21.016 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:21.016 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:21.016 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:21.274 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:21.274 "name": "pt1", 00:20:21.274 "aliases": [ 00:20:21.274 "9d5baffc-2038-185b-b11c-6160a683177a" 00:20:21.274 ], 00:20:21.274 "product_name": "passthru", 00:20:21.274 "block_size": 512, 00:20:21.274 "num_blocks": 65536, 00:20:21.274 "uuid": "9d5baffc-2038-185b-b11c-6160a683177a", 00:20:21.274 "assigned_rate_limits": { 00:20:21.274 "rw_ios_per_sec": 0, 00:20:21.274 "rw_mbytes_per_sec": 0, 00:20:21.274 "r_mbytes_per_sec": 0, 00:20:21.274 "w_mbytes_per_sec": 0 00:20:21.274 }, 00:20:21.274 "claimed": true, 00:20:21.274 "claim_type": "exclusive_write", 00:20:21.275 "zoned": false, 00:20:21.275 "supported_io_types": { 00:20:21.275 "read": true, 00:20:21.275 "write": true, 00:20:21.275 "unmap": true, 00:20:21.275 "write_zeroes": true, 00:20:21.275 "flush": true, 00:20:21.275 "reset": true, 00:20:21.275 "compare": false, 00:20:21.275 "compare_and_write": false, 00:20:21.275 "abort": true, 00:20:21.275 "nvme_admin": false, 00:20:21.275 "nvme_io": false 00:20:21.275 }, 00:20:21.275 "memory_domains": [ 00:20:21.275 { 00:20:21.275 "dma_device_id": "system", 00:20:21.275 "dma_device_type": 1 00:20:21.275 }, 00:20:21.275 { 00:20:21.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.275 "dma_device_type": 2 00:20:21.275 } 00:20:21.275 ], 00:20:21.275 "driver_specific": { 00:20:21.275 "passthru": { 00:20:21.275 "name": "pt1", 00:20:21.275 "base_bdev_name": "malloc1" 00:20:21.275 } 00:20:21.275 } 00:20:21.275 }' 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:21.275 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:21.533 "name": "pt2", 00:20:21.533 "aliases": [ 00:20:21.533 "ae825aa4-1f9f-f752-8be3-124e442cf035" 00:20:21.533 ], 00:20:21.533 "product_name": "passthru", 00:20:21.533 "block_size": 512, 00:20:21.533 "num_blocks": 65536, 00:20:21.533 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:21.533 "assigned_rate_limits": { 00:20:21.533 "rw_ios_per_sec": 0, 00:20:21.533 "rw_mbytes_per_sec": 0, 00:20:21.533 "r_mbytes_per_sec": 0, 00:20:21.533 "w_mbytes_per_sec": 0 00:20:21.533 }, 00:20:21.533 "claimed": true, 00:20:21.533 "claim_type": "exclusive_write", 00:20:21.533 "zoned": false, 00:20:21.533 "supported_io_types": { 00:20:21.533 "read": true, 00:20:21.533 "write": true, 00:20:21.533 "unmap": true, 00:20:21.533 "write_zeroes": true, 00:20:21.533 "flush": true, 00:20:21.533 "reset": true, 00:20:21.533 "compare": false, 00:20:21.533 "compare_and_write": false, 00:20:21.533 "abort": true, 00:20:21.533 "nvme_admin": false, 00:20:21.533 "nvme_io": false 00:20:21.533 }, 00:20:21.533 "memory_domains": [ 00:20:21.533 { 00:20:21.533 "dma_device_id": "system", 00:20:21.533 "dma_device_type": 1 00:20:21.533 }, 00:20:21.533 { 00:20:21.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.533 "dma_device_type": 2 00:20:21.533 } 00:20:21.533 ], 00:20:21.533 "driver_specific": { 00:20:21.533 "passthru": { 00:20:21.533 "name": "pt2", 00:20:21.533 "base_bdev_name": "malloc2" 00:20:21.533 } 00:20:21.533 } 00:20:21.533 }' 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.533 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 
00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:21.792 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:22.050 [2024-05-16 07:33:15.371028] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.050 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 '!=' 8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 ']' 00:20:22.050 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:22.050 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:22.050 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:22.051 [2024-05-16 07:33:15.582995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.051 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.310 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.310 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.310 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.586 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.586 "name": "raid_bdev1", 00:20:22.586 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:22.586 "strip_size_kb": 0, 00:20:22.586 "state": "online", 00:20:22.586 "raid_level": "raid1", 00:20:22.586 "superblock": true, 00:20:22.586 "num_base_bdevs": 2, 00:20:22.586 "num_base_bdevs_discovered": 1, 00:20:22.586 "num_base_bdevs_operational": 1, 00:20:22.586 "base_bdevs_list": [ 00:20:22.586 { 00:20:22.586 "name": null, 00:20:22.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.586 "is_configured": false, 00:20:22.586 "data_offset": 2048, 00:20:22.586 "data_size": 63488 00:20:22.586 }, 00:20:22.586 { 00:20:22.586 "name": "pt2", 00:20:22.586 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:22.586 "is_configured": true, 00:20:22.586 "data_offset": 2048, 00:20:22.586 "data_size": 63488 00:20:22.586 } 00:20:22.586 ] 00:20:22.586 }' 00:20:22.586 07:33:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.586 07:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.847 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:23.106 [2024-05-16 07:33:16.522896] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.106 [2024-05-16 07:33:16.522920] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.106 [2024-05-16 07:33:16.522940] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.106 [2024-05-16 07:33:16.522950] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.106 [2024-05-16 07:33:16.522955] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b741180 name raid_bdev1, state offline 00:20:23.106 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.106 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:23.364 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:23.364 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:23.364 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:23.364 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:23.364 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:20:23.621 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:23.880 [2024-05-16 07:33:17.386815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:23.880 [2024-05-16 07:33:17.386910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.880 [2024-05-16 07:33:17.386949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b740f00 00:20:23.880 [2024-05-16 07:33:17.386966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.880 [2024-05-16 07:33:17.387525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.880 [2024-05-16 07:33:17.387561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:23.880 [2024-05-16 07:33:17.387596] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:23.880 [2024-05-16 07:33:17.387614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:20:23.880 [2024-05-16 07:33:17.387644] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b741180 00:20:23.880 [2024-05-16 07:33:17.387653] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.880 [2024-05-16 07:33:17.387686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7a3e20 00:20:23.880 [2024-05-16 07:33:17.387733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b741180 00:20:23.880 [2024-05-16 07:33:17.387742] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b741180 00:20:23.880 [2024-05-16 07:33:17.387771] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.880 pt2 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.880 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.138 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.139 "name": "raid_bdev1", 00:20:24.139 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:24.139 "strip_size_kb": 0, 00:20:24.139 "state": "online", 00:20:24.139 "raid_level": "raid1", 00:20:24.139 "superblock": true, 00:20:24.139 "num_base_bdevs": 2, 00:20:24.139 "num_base_bdevs_discovered": 1, 00:20:24.139 "num_base_bdevs_operational": 1, 00:20:24.139 "base_bdevs_list": [ 00:20:24.139 { 00:20:24.139 "name": null, 00:20:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.139 "is_configured": false, 00:20:24.139 "data_offset": 2048, 00:20:24.139 "data_size": 63488 00:20:24.139 }, 00:20:24.139 { 00:20:24.139 "name": "pt2", 00:20:24.139 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:24.139 "is_configured": true, 00:20:24.139 "data_offset": 2048, 00:20:24.139 "data_size": 63488 00:20:24.139 } 00:20:24.139 ] 00:20:24.139 }' 00:20:24.139 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.139 07:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.707 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:20:24.965 [2024-05-16 07:33:18.338680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.965 [2024-05-16 07:33:18.338708] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.965 [2024-05-16 07:33:18.338730] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.965 [2024-05-16 07:33:18.338741] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.965 [2024-05-16 07:33:18.338746] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b741180 name raid_bdev1, state offline 00:20:24.965 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.965 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:25.223 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:25.223 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:25.223 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:25.223 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:25.789 [2024-05-16 07:33:19.062634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:25.789 [2024-05-16 07:33:19.062694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.789 [2024-05-16 07:33:19.062722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b740c80 00:20:25.789 [2024-05-16 07:33:19.062730] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.789 [2024-05-16 07:33:19.063233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.789 [2024-05-16 07:33:19.063255] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:25.789 [2024-05-16 07:33:19.063277] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:25.789 [2024-05-16 07:33:19.063287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:25.789 [2024-05-16 07:33:19.063313] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:25.789 [2024-05-16 07:33:19.063317] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.789 [2024-05-16 07:33:19.063322] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b740780 name raid_bdev1, state configuring 00:20:25.789 [2024-05-16 07:33:19.063329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.789 [2024-05-16 07:33:19.063341] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b740780 00:20:25.789 [2024-05-16 07:33:19.063344] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:25.789 [2024-05-16 07:33:19.063363] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b7a3e20 00:20:25.789 [2024-05-16 07:33:19.063397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b740780 00:20:25.789 [2024-05-16 07:33:19.063400] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b740780 00:20:25.789 [2024-05-16 07:33:19.063417] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.789 pt1 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.789 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.046 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.046 "name": "raid_bdev1", 00:20:26.046 "uuid": "8aa3c9bd-1356-11ef-8e8f-9dd684e56d79", 00:20:26.046 "strip_size_kb": 0, 00:20:26.046 "state": "online", 00:20:26.046 "raid_level": "raid1", 00:20:26.046 "superblock": true, 00:20:26.046 "num_base_bdevs": 2, 00:20:26.046 "num_base_bdevs_discovered": 1, 00:20:26.046 "num_base_bdevs_operational": 1, 00:20:26.046 "base_bdevs_list": [ 00:20:26.046 { 00:20:26.046 "name": null, 00:20:26.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.046 "is_configured": false, 00:20:26.046 "data_offset": 2048, 00:20:26.046 "data_size": 63488 00:20:26.046 }, 00:20:26.046 { 00:20:26.046 "name": "pt2", 00:20:26.046 "uuid": "ae825aa4-1f9f-f752-8be3-124e442cf035", 00:20:26.046 "is_configured": true, 00:20:26.046 "data_offset": 2048, 00:20:26.046 "data_size": 63488 00:20:26.046 } 00:20:26.046 ] 00:20:26.046 }' 00:20:26.046 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.046 07:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.304 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:26.304 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:26.562 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:26.562 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:26.562 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:26.821 [2024-05-16 07:33:20.346525] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 '!=' 8aa3c9bd-1356-11ef-8e8f-9dd684e56d79 ']' 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 51646 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 51646 ']' 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 51646 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 51646 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:26.821 killing process with pid 51646 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 51646' 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 51646 00:20:26.821 [2024-05-16 07:33:20.380193] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.821 [2024-05-16 07:33:20.380231] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.821 [2024-05-16 07:33:20.380245] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.821 [2024-05-16 07:33:20.380250] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b740780 name raid_bdev1, state offline 00:20:26.821 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 51646 00:20:27.081 [2024-05-16 07:33:20.389868] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.081 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:27.081 00:20:27.081 real 0m14.228s 00:20:27.081 user 0m25.670s 00:20:27.081 sys 0m2.057s 00:20:27.081 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:27.081 07:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.081 ************************************ 00:20:27.081 END TEST raid_superblock_test 00:20:27.081 ************************************ 00:20:27.081 07:33:20 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:20:27.081 07:33:20 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:20:27.081 07:33:20 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:20:27.081 07:33:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:27.081 07:33:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:27.081 07:33:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.081 ************************************ 00:20:27.081 START TEST raid_state_function_test 
00:20:27.081 ************************************ 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=52041 00:20:27.081 Process raid pid: 52041 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 52041' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # 
waitforlisten 52041 /var/tmp/spdk-raid.sock 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 52041 ']' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.081 07:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.081 [2024-05-16 07:33:20.619257] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:27.081 [2024-05-16 07:33:20.619502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:27.648 EAL: TSC is not safe to use in SMP mode 00:20:27.648 EAL: TSC is not invariant 00:20:27.648 [2024-05-16 07:33:21.133952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.906 [2024-05-16 07:33:21.231071] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:27.906 [2024-05-16 07:33:21.233740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.906 [2024-05-16 07:33:21.234667] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.906 [2024-05-16 07:33:21.234683] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.165 07:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.165 07:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:20:28.165 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:28.425 [2024-05-16 07:33:21.955169] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.425 [2024-05-16 07:33:21.955221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.425 [2024-05-16 07:33:21.955226] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.425 [2024-05-16 07:33:21.955234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.425 [2024-05-16 07:33:21.955237] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.425 [2024-05-16 07:33:21.955243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.426 07:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.990 07:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.990 "name": "Existed_Raid", 00:20:28.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.990 "strip_size_kb": 64, 00:20:28.990 "state": "configuring", 00:20:28.990 "raid_level": "raid0", 00:20:28.990 "superblock": false, 00:20:28.990 "num_base_bdevs": 3, 00:20:28.990 "num_base_bdevs_discovered": 0, 00:20:28.990 "num_base_bdevs_operational": 3, 00:20:28.990 "base_bdevs_list": [ 00:20:28.990 { 00:20:28.990 "name": "BaseBdev1", 00:20:28.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.990 "is_configured": false, 00:20:28.990 "data_offset": 0, 00:20:28.990 "data_size": 0 00:20:28.990 }, 00:20:28.990 { 00:20:28.990 "name": "BaseBdev2", 00:20:28.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.990 "is_configured": false, 00:20:28.990 "data_offset": 0, 00:20:28.990 "data_size": 0 00:20:28.990 }, 00:20:28.990 { 00:20:28.990 "name": "BaseBdev3", 00:20:28.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.990 "is_configured": false, 00:20:28.990 "data_offset": 0, 00:20:28.990 "data_size": 0 00:20:28.990 } 00:20:28.990 ] 00:20:28.990 }' 00:20:28.990 07:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.990 07:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.248 07:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:29.506 [2024-05-16 07:33:23.043064] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.506 [2024-05-16 07:33:23.043104] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cf45500 name Existed_Raid, state configuring 00:20:29.766 07:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:30.023 [2024-05-16 07:33:23.375037] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:30.023 [2024-05-16 07:33:23.375100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:30.023 [2024-05-16 07:33:23.375118] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.023 
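[editor's note] The trace above shows the state-function test issuing bdev_raid_create for Existed_Raid before any of its base bdevs exist, which leaves the raid bdev in the "configuring" state until malloc base bdevs are registered and claimed. A minimal sketch of replaying that RPC sequence by hand, assuming the bdev_svc app from this run is still listening on /var/tmp/spdk-raid.sock (socket path, bdev names, and sizes are taken from the log lines above, not verified independently):

    # Sketch only: replays the sequence traced above against the test RPC socket.
    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create the raid0 volume first; its base bdevs do not exist yet,
    # so the raid bdev stays in the "configuring" state.
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Register the missing base bdevs (32 MB, 512-byte blocks, i.e. 65536 blocks
    # as seen in the bdev_get_bdevs dumps below).
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2
    $RPC bdev_malloc_create 32 512 -b BaseBdev3

    # Once all three base bdevs are claimed, the raid bdev should report "online".
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

[end editor's note]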
[2024-05-16 07:33:23.375134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.023 [2024-05-16 07:33:23.375168] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.023 [2024-05-16 07:33:23.375185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.024 07:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:30.282 [2024-05-16 07:33:23.688102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.282 BaseBdev1 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:30.282 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:30.540 07:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:30.797 [ 00:20:30.797 { 00:20:30.797 "name": "BaseBdev1", 00:20:30.797 "aliases": [ 00:20:30.797 "93a3b86e-1356-11ef-8e8f-9dd684e56d79" 00:20:30.797 ], 00:20:30.797 "product_name": "Malloc disk", 00:20:30.797 "block_size": 512, 00:20:30.797 "num_blocks": 65536, 00:20:30.797 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:30.797 "assigned_rate_limits": { 00:20:30.797 "rw_ios_per_sec": 0, 00:20:30.797 "rw_mbytes_per_sec": 0, 00:20:30.797 "r_mbytes_per_sec": 0, 00:20:30.797 "w_mbytes_per_sec": 0 00:20:30.797 }, 00:20:30.797 "claimed": true, 00:20:30.797 "claim_type": "exclusive_write", 00:20:30.797 "zoned": false, 00:20:30.797 "supported_io_types": { 00:20:30.797 "read": true, 00:20:30.797 "write": true, 00:20:30.797 "unmap": true, 00:20:30.797 "write_zeroes": true, 00:20:30.797 "flush": true, 00:20:30.797 "reset": true, 00:20:30.797 "compare": false, 00:20:30.797 "compare_and_write": false, 00:20:30.797 "abort": true, 00:20:30.797 "nvme_admin": false, 00:20:30.797 "nvme_io": false 00:20:30.797 }, 00:20:30.797 "memory_domains": [ 00:20:30.797 { 00:20:30.797 "dma_device_id": "system", 00:20:30.797 "dma_device_type": 1 00:20:30.797 }, 00:20:30.797 { 00:20:30.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.797 "dma_device_type": 2 00:20:30.797 } 00:20:30.797 ], 00:20:30.797 "driver_specific": {} 00:20:30.797 } 00:20:30.797 ] 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:30.797 07:33:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.797 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.055 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.055 "name": "Existed_Raid", 00:20:31.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.055 "strip_size_kb": 64, 00:20:31.055 "state": "configuring", 00:20:31.055 "raid_level": "raid0", 00:20:31.055 "superblock": false, 00:20:31.055 "num_base_bdevs": 3, 00:20:31.055 "num_base_bdevs_discovered": 1, 00:20:31.055 "num_base_bdevs_operational": 3, 00:20:31.055 "base_bdevs_list": [ 00:20:31.055 { 00:20:31.055 "name": "BaseBdev1", 00:20:31.055 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:31.055 "is_configured": true, 00:20:31.055 "data_offset": 0, 00:20:31.055 "data_size": 65536 00:20:31.055 }, 00:20:31.055 { 00:20:31.055 "name": "BaseBdev2", 00:20:31.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.055 "is_configured": false, 00:20:31.055 "data_offset": 0, 00:20:31.055 "data_size": 0 00:20:31.055 }, 00:20:31.055 { 00:20:31.055 "name": "BaseBdev3", 00:20:31.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.055 "is_configured": false, 00:20:31.055 "data_offset": 0, 00:20:31.055 "data_size": 0 00:20:31.055 } 00:20:31.055 ] 00:20:31.055 }' 00:20:31.055 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.055 07:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.314 07:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:31.572 [2024-05-16 07:33:25.110856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.572 [2024-05-16 07:33:25.110895] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cf45500 name Existed_Raid, state configuring 00:20:31.572 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:32.137 [2024-05-16 07:33:25.394864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.137 [2024-05-16 07:33:25.395550] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:20:32.137 [2024-05-16 07:33:25.395590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.137 [2024-05-16 07:33:25.395594] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.137 [2024-05-16 07:33:25.395602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.137 "name": "Existed_Raid", 00:20:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.137 "strip_size_kb": 64, 00:20:32.137 "state": "configuring", 00:20:32.137 "raid_level": "raid0", 00:20:32.137 "superblock": false, 00:20:32.137 "num_base_bdevs": 3, 00:20:32.137 "num_base_bdevs_discovered": 1, 00:20:32.137 "num_base_bdevs_operational": 3, 00:20:32.137 "base_bdevs_list": [ 00:20:32.137 { 00:20:32.137 "name": "BaseBdev1", 00:20:32.137 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:32.137 "is_configured": true, 00:20:32.137 "data_offset": 0, 00:20:32.137 "data_size": 65536 00:20:32.137 }, 00:20:32.137 { 00:20:32.137 "name": "BaseBdev2", 00:20:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.137 "is_configured": false, 00:20:32.137 "data_offset": 0, 00:20:32.137 "data_size": 0 00:20:32.137 }, 00:20:32.137 { 00:20:32.137 "name": "BaseBdev3", 00:20:32.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.137 "is_configured": false, 00:20:32.137 "data_offset": 0, 00:20:32.137 "data_size": 0 00:20:32.137 } 00:20:32.137 ] 00:20:32.137 }' 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.137 07:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:32.703 [2024-05-16 07:33:26.226919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.703 BaseBdev2 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:32.703 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:33.270 [ 00:20:33.270 { 00:20:33.270 "name": "BaseBdev2", 00:20:33.270 "aliases": [ 00:20:33.270 "95274298-1356-11ef-8e8f-9dd684e56d79" 00:20:33.270 ], 00:20:33.270 "product_name": "Malloc disk", 00:20:33.270 "block_size": 512, 00:20:33.270 "num_blocks": 65536, 00:20:33.270 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:33.270 "assigned_rate_limits": { 00:20:33.270 "rw_ios_per_sec": 0, 00:20:33.270 "rw_mbytes_per_sec": 0, 00:20:33.270 "r_mbytes_per_sec": 0, 00:20:33.270 "w_mbytes_per_sec": 0 00:20:33.270 }, 00:20:33.270 "claimed": true, 00:20:33.270 "claim_type": "exclusive_write", 00:20:33.270 "zoned": false, 00:20:33.270 "supported_io_types": { 00:20:33.270 "read": true, 00:20:33.270 "write": true, 00:20:33.270 "unmap": true, 00:20:33.270 "write_zeroes": true, 00:20:33.270 "flush": true, 00:20:33.270 "reset": true, 00:20:33.270 "compare": false, 00:20:33.270 "compare_and_write": false, 00:20:33.270 "abort": true, 00:20:33.270 "nvme_admin": false, 00:20:33.270 "nvme_io": false 00:20:33.270 }, 00:20:33.270 "memory_domains": [ 00:20:33.270 { 00:20:33.270 "dma_device_id": "system", 00:20:33.270 "dma_device_type": 1 00:20:33.270 }, 00:20:33.270 { 00:20:33.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.270 "dma_device_type": 2 00:20:33.270 } 00:20:33.270 ], 00:20:33.270 "driver_specific": {} 00:20:33.270 } 00:20:33.270 ] 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:33.270 07:33:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.270 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.527 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.527 "name": "Existed_Raid", 00:20:33.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.527 "strip_size_kb": 64, 00:20:33.528 "state": "configuring", 00:20:33.528 "raid_level": "raid0", 00:20:33.528 "superblock": false, 00:20:33.528 "num_base_bdevs": 3, 00:20:33.528 "num_base_bdevs_discovered": 2, 00:20:33.528 "num_base_bdevs_operational": 3, 00:20:33.528 "base_bdevs_list": [ 00:20:33.528 { 00:20:33.528 "name": "BaseBdev1", 00:20:33.528 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:33.528 "is_configured": true, 00:20:33.528 "data_offset": 0, 00:20:33.528 "data_size": 65536 00:20:33.528 }, 00:20:33.528 { 00:20:33.528 "name": "BaseBdev2", 00:20:33.528 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:33.528 "is_configured": true, 00:20:33.528 "data_offset": 0, 00:20:33.528 "data_size": 65536 00:20:33.528 }, 00:20:33.528 { 00:20:33.528 "name": "BaseBdev3", 00:20:33.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.528 "is_configured": false, 00:20:33.528 "data_offset": 0, 00:20:33.528 "data_size": 0 00:20:33.528 } 00:20:33.528 ] 00:20:33.528 }' 00:20:33.528 07:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.528 07:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.787 07:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:34.045 [2024-05-16 07:33:27.566849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.045 [2024-05-16 07:33:27.566879] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cf45a00 00:20:34.045 [2024-05-16 07:33:27.566883] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:34.045 [2024-05-16 07:33:27.566902] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cfa8ec0 00:20:34.045 [2024-05-16 07:33:27.566991] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cf45a00 00:20:34.045 [2024-05-16 07:33:27.566995] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cf45a00 00:20:34.045 [2024-05-16 07:33:27.567023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.045 BaseBdev3 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:34.045 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.612 07:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:34.612 [ 00:20:34.612 { 00:20:34.612 "name": "BaseBdev3", 00:20:34.612 "aliases": [ 00:20:34.612 "95f3b830-1356-11ef-8e8f-9dd684e56d79" 00:20:34.612 ], 00:20:34.612 "product_name": "Malloc disk", 00:20:34.612 "block_size": 512, 00:20:34.612 "num_blocks": 65536, 00:20:34.612 "uuid": "95f3b830-1356-11ef-8e8f-9dd684e56d79", 00:20:34.612 "assigned_rate_limits": { 00:20:34.612 "rw_ios_per_sec": 0, 00:20:34.612 "rw_mbytes_per_sec": 0, 00:20:34.612 "r_mbytes_per_sec": 0, 00:20:34.612 "w_mbytes_per_sec": 0 00:20:34.612 }, 00:20:34.612 "claimed": true, 00:20:34.612 "claim_type": "exclusive_write", 00:20:34.612 "zoned": false, 00:20:34.612 "supported_io_types": { 00:20:34.612 "read": true, 00:20:34.612 "write": true, 00:20:34.612 "unmap": true, 00:20:34.612 "write_zeroes": true, 00:20:34.612 "flush": true, 00:20:34.612 "reset": true, 00:20:34.612 "compare": false, 00:20:34.612 "compare_and_write": false, 00:20:34.612 "abort": true, 00:20:34.612 "nvme_admin": false, 00:20:34.612 "nvme_io": false 00:20:34.612 }, 00:20:34.612 "memory_domains": [ 00:20:34.612 { 00:20:34.612 "dma_device_id": "system", 00:20:34.612 "dma_device_type": 1 00:20:34.612 }, 00:20:34.612 { 00:20:34.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.612 "dma_device_type": 2 00:20:34.612 } 00:20:34.612 ], 00:20:34.613 "driver_specific": {} 00:20:34.613 } 00:20:34.613 ] 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.870 07:33:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.870 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.136 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.136 "name": "Existed_Raid", 00:20:35.136 "uuid": "95f3bd91-1356-11ef-8e8f-9dd684e56d79", 00:20:35.136 "strip_size_kb": 64, 00:20:35.136 "state": "online", 00:20:35.136 "raid_level": "raid0", 00:20:35.136 "superblock": false, 00:20:35.136 "num_base_bdevs": 3, 00:20:35.136 "num_base_bdevs_discovered": 3, 00:20:35.136 "num_base_bdevs_operational": 3, 00:20:35.136 "base_bdevs_list": [ 00:20:35.136 { 00:20:35.136 "name": "BaseBdev1", 00:20:35.136 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:35.136 "is_configured": true, 00:20:35.136 "data_offset": 0, 00:20:35.136 "data_size": 65536 00:20:35.136 }, 00:20:35.136 { 00:20:35.136 "name": "BaseBdev2", 00:20:35.136 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:35.136 "is_configured": true, 00:20:35.136 "data_offset": 0, 00:20:35.136 "data_size": 65536 00:20:35.136 }, 00:20:35.136 { 00:20:35.136 "name": "BaseBdev3", 00:20:35.136 "uuid": "95f3b830-1356-11ef-8e8f-9dd684e56d79", 00:20:35.136 "is_configured": true, 00:20:35.136 "data_offset": 0, 00:20:35.136 "data_size": 65536 00:20:35.136 } 00:20:35.136 ] 00:20:35.136 }' 00:20:35.136 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.136 07:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:35.407 07:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:35.665 [2024-05-16 07:33:29.166641] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.665 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:35.665 "name": "Existed_Raid", 00:20:35.665 "aliases": [ 00:20:35.665 "95f3bd91-1356-11ef-8e8f-9dd684e56d79" 00:20:35.665 ], 00:20:35.665 "product_name": "Raid Volume", 00:20:35.665 "block_size": 512, 00:20:35.665 "num_blocks": 196608, 00:20:35.665 "uuid": "95f3bd91-1356-11ef-8e8f-9dd684e56d79", 00:20:35.665 "assigned_rate_limits": 
{ 00:20:35.665 "rw_ios_per_sec": 0, 00:20:35.665 "rw_mbytes_per_sec": 0, 00:20:35.665 "r_mbytes_per_sec": 0, 00:20:35.665 "w_mbytes_per_sec": 0 00:20:35.665 }, 00:20:35.665 "claimed": false, 00:20:35.665 "zoned": false, 00:20:35.666 "supported_io_types": { 00:20:35.666 "read": true, 00:20:35.666 "write": true, 00:20:35.666 "unmap": true, 00:20:35.666 "write_zeroes": true, 00:20:35.666 "flush": true, 00:20:35.666 "reset": true, 00:20:35.666 "compare": false, 00:20:35.666 "compare_and_write": false, 00:20:35.666 "abort": false, 00:20:35.666 "nvme_admin": false, 00:20:35.666 "nvme_io": false 00:20:35.666 }, 00:20:35.666 "memory_domains": [ 00:20:35.666 { 00:20:35.666 "dma_device_id": "system", 00:20:35.666 "dma_device_type": 1 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.666 "dma_device_type": 2 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "dma_device_id": "system", 00:20:35.666 "dma_device_type": 1 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.666 "dma_device_type": 2 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "dma_device_id": "system", 00:20:35.666 "dma_device_type": 1 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.666 "dma_device_type": 2 00:20:35.666 } 00:20:35.666 ], 00:20:35.666 "driver_specific": { 00:20:35.666 "raid": { 00:20:35.666 "uuid": "95f3bd91-1356-11ef-8e8f-9dd684e56d79", 00:20:35.666 "strip_size_kb": 64, 00:20:35.666 "state": "online", 00:20:35.666 "raid_level": "raid0", 00:20:35.666 "superblock": false, 00:20:35.666 "num_base_bdevs": 3, 00:20:35.666 "num_base_bdevs_discovered": 3, 00:20:35.666 "num_base_bdevs_operational": 3, 00:20:35.666 "base_bdevs_list": [ 00:20:35.666 { 00:20:35.666 "name": "BaseBdev1", 00:20:35.666 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:35.666 "is_configured": true, 00:20:35.666 "data_offset": 0, 00:20:35.666 "data_size": 65536 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "name": "BaseBdev2", 00:20:35.666 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:35.666 "is_configured": true, 00:20:35.666 "data_offset": 0, 00:20:35.666 "data_size": 65536 00:20:35.666 }, 00:20:35.666 { 00:20:35.666 "name": "BaseBdev3", 00:20:35.666 "uuid": "95f3b830-1356-11ef-8e8f-9dd684e56d79", 00:20:35.666 "is_configured": true, 00:20:35.666 "data_offset": 0, 00:20:35.666 "data_size": 65536 00:20:35.666 } 00:20:35.666 ] 00:20:35.666 } 00:20:35.666 } 00:20:35.666 }' 00:20:35.666 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:35.666 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:35.666 BaseBdev2 00:20:35.666 BaseBdev3' 00:20:35.666 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:35.666 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:35.666 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:35.924 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:35.924 "name": "BaseBdev1", 00:20:35.924 "aliases": [ 00:20:35.924 "93a3b86e-1356-11ef-8e8f-9dd684e56d79" 00:20:35.924 ], 00:20:35.924 "product_name": "Malloc disk", 00:20:35.924 "block_size": 512, 00:20:35.924 "num_blocks": 
65536, 00:20:35.924 "uuid": "93a3b86e-1356-11ef-8e8f-9dd684e56d79", 00:20:35.924 "assigned_rate_limits": { 00:20:35.924 "rw_ios_per_sec": 0, 00:20:35.924 "rw_mbytes_per_sec": 0, 00:20:35.924 "r_mbytes_per_sec": 0, 00:20:35.924 "w_mbytes_per_sec": 0 00:20:35.924 }, 00:20:35.924 "claimed": true, 00:20:35.924 "claim_type": "exclusive_write", 00:20:35.924 "zoned": false, 00:20:35.924 "supported_io_types": { 00:20:35.924 "read": true, 00:20:35.924 "write": true, 00:20:35.924 "unmap": true, 00:20:35.924 "write_zeroes": true, 00:20:35.924 "flush": true, 00:20:35.924 "reset": true, 00:20:35.924 "compare": false, 00:20:35.924 "compare_and_write": false, 00:20:35.924 "abort": true, 00:20:35.924 "nvme_admin": false, 00:20:35.924 "nvme_io": false 00:20:35.924 }, 00:20:35.924 "memory_domains": [ 00:20:35.924 { 00:20:35.924 "dma_device_id": "system", 00:20:35.924 "dma_device_type": 1 00:20:35.924 }, 00:20:35.924 { 00:20:35.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.924 "dma_device_type": 2 00:20:35.924 } 00:20:35.924 ], 00:20:35.924 "driver_specific": {} 00:20:35.924 }' 00:20:35.924 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:35.924 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:35.925 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:35.925 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:35.925 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:36.183 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:36.444 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:36.444 "name": "BaseBdev2", 00:20:36.444 "aliases": [ 00:20:36.444 "95274298-1356-11ef-8e8f-9dd684e56d79" 00:20:36.444 ], 00:20:36.444 "product_name": "Malloc disk", 00:20:36.444 "block_size": 512, 00:20:36.444 "num_blocks": 65536, 00:20:36.444 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:36.444 "assigned_rate_limits": { 00:20:36.444 "rw_ios_per_sec": 0, 00:20:36.444 "rw_mbytes_per_sec": 0, 00:20:36.444 "r_mbytes_per_sec": 0, 00:20:36.444 "w_mbytes_per_sec": 0 00:20:36.444 }, 00:20:36.444 "claimed": true, 00:20:36.444 "claim_type": "exclusive_write", 00:20:36.444 "zoned": false, 00:20:36.444 "supported_io_types": { 00:20:36.444 "read": true, 00:20:36.444 "write": true, 00:20:36.444 "unmap": true, 
00:20:36.444 "write_zeroes": true, 00:20:36.444 "flush": true, 00:20:36.444 "reset": true, 00:20:36.444 "compare": false, 00:20:36.444 "compare_and_write": false, 00:20:36.444 "abort": true, 00:20:36.444 "nvme_admin": false, 00:20:36.444 "nvme_io": false 00:20:36.444 }, 00:20:36.444 "memory_domains": [ 00:20:36.445 { 00:20:36.445 "dma_device_id": "system", 00:20:36.445 "dma_device_type": 1 00:20:36.445 }, 00:20:36.445 { 00:20:36.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.445 "dma_device_type": 2 00:20:36.445 } 00:20:36.445 ], 00:20:36.445 "driver_specific": {} 00:20:36.445 }' 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:36.445 07:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:36.703 "name": "BaseBdev3", 00:20:36.703 "aliases": [ 00:20:36.703 "95f3b830-1356-11ef-8e8f-9dd684e56d79" 00:20:36.703 ], 00:20:36.703 "product_name": "Malloc disk", 00:20:36.703 "block_size": 512, 00:20:36.703 "num_blocks": 65536, 00:20:36.703 "uuid": "95f3b830-1356-11ef-8e8f-9dd684e56d79", 00:20:36.703 "assigned_rate_limits": { 00:20:36.703 "rw_ios_per_sec": 0, 00:20:36.703 "rw_mbytes_per_sec": 0, 00:20:36.703 "r_mbytes_per_sec": 0, 00:20:36.703 "w_mbytes_per_sec": 0 00:20:36.703 }, 00:20:36.703 "claimed": true, 00:20:36.703 "claim_type": "exclusive_write", 00:20:36.703 "zoned": false, 00:20:36.703 "supported_io_types": { 00:20:36.703 "read": true, 00:20:36.703 "write": true, 00:20:36.703 "unmap": true, 00:20:36.703 "write_zeroes": true, 00:20:36.703 "flush": true, 00:20:36.703 "reset": true, 00:20:36.703 "compare": false, 00:20:36.703 "compare_and_write": false, 00:20:36.703 "abort": true, 00:20:36.703 "nvme_admin": false, 00:20:36.703 "nvme_io": false 00:20:36.703 }, 00:20:36.703 "memory_domains": [ 00:20:36.703 { 00:20:36.703 "dma_device_id": "system", 00:20:36.703 "dma_device_type": 1 00:20:36.703 }, 00:20:36.703 { 00:20:36.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.703 
"dma_device_type": 2 00:20:36.703 } 00:20:36.703 ], 00:20:36.703 "driver_specific": {} 00:20:36.703 }' 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:36.703 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:36.962 [2024-05-16 07:33:30.298513] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:36.962 [2024-05-16 07:33:30.298538] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.962 [2024-05-16 07:33:30.298550] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.962 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.220 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.220 "name": "Existed_Raid", 00:20:37.220 "uuid": "95f3bd91-1356-11ef-8e8f-9dd684e56d79", 00:20:37.220 "strip_size_kb": 64, 00:20:37.220 "state": "offline", 00:20:37.220 "raid_level": "raid0", 00:20:37.220 "superblock": false, 00:20:37.220 "num_base_bdevs": 3, 00:20:37.220 "num_base_bdevs_discovered": 2, 00:20:37.220 "num_base_bdevs_operational": 2, 00:20:37.220 "base_bdevs_list": [ 00:20:37.220 { 00:20:37.220 "name": null, 00:20:37.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.220 "is_configured": false, 00:20:37.220 "data_offset": 0, 00:20:37.220 "data_size": 65536 00:20:37.220 }, 00:20:37.220 { 00:20:37.220 "name": "BaseBdev2", 00:20:37.220 "uuid": "95274298-1356-11ef-8e8f-9dd684e56d79", 00:20:37.220 "is_configured": true, 00:20:37.220 "data_offset": 0, 00:20:37.220 "data_size": 65536 00:20:37.220 }, 00:20:37.220 { 00:20:37.220 "name": "BaseBdev3", 00:20:37.220 "uuid": "95f3b830-1356-11ef-8e8f-9dd684e56d79", 00:20:37.220 "is_configured": true, 00:20:37.220 "data_offset": 0, 00:20:37.220 "data_size": 65536 00:20:37.220 } 00:20:37.220 ] 00:20:37.220 }' 00:20:37.220 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.220 07:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.479 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:37.479 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:37.479 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.479 07:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:37.738 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:37.738 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:37.738 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:37.996 [2024-05-16 07:33:31.499145] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:37.996 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:37.996 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:37.996 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.996 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:20:38.564 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:20:38.564 07:33:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:38.564 07:33:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:38.823 [2024-05-16 07:33:32.127857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:38.823 [2024-05-16 07:33:32.127886] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cf45a00 name Existed_Raid, state offline 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:38.823 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:39.082 BaseBdev2 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:39.082 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.341 07:33:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:39.599 [ 00:20:39.599 { 00:20:39.599 "name": "BaseBdev2", 00:20:39.599 "aliases": [ 00:20:39.599 "98f0c761-1356-11ef-8e8f-9dd684e56d79" 00:20:39.599 ], 00:20:39.599 "product_name": "Malloc disk", 00:20:39.599 "block_size": 512, 00:20:39.599 "num_blocks": 65536, 00:20:39.599 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:39.599 "assigned_rate_limits": { 00:20:39.599 "rw_ios_per_sec": 0, 00:20:39.599 "rw_mbytes_per_sec": 0, 00:20:39.599 "r_mbytes_per_sec": 0, 00:20:39.599 "w_mbytes_per_sec": 0 00:20:39.599 }, 00:20:39.599 "claimed": false, 00:20:39.599 "zoned": false, 00:20:39.599 "supported_io_types": { 00:20:39.599 "read": true, 00:20:39.599 "write": true, 00:20:39.599 "unmap": true, 00:20:39.599 "write_zeroes": true, 00:20:39.599 "flush": true, 00:20:39.599 "reset": true, 00:20:39.599 "compare": false, 00:20:39.599 
"compare_and_write": false, 00:20:39.599 "abort": true, 00:20:39.599 "nvme_admin": false, 00:20:39.599 "nvme_io": false 00:20:39.599 }, 00:20:39.599 "memory_domains": [ 00:20:39.599 { 00:20:39.599 "dma_device_id": "system", 00:20:39.599 "dma_device_type": 1 00:20:39.599 }, 00:20:39.599 { 00:20:39.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.599 "dma_device_type": 2 00:20:39.600 } 00:20:39.600 ], 00:20:39.600 "driver_specific": {} 00:20:39.600 } 00:20:39.600 ] 00:20:39.857 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:39.857 07:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:39.857 07:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:39.857 07:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:40.114 BaseBdev3 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:40.114 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:40.372 07:33:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:40.634 [ 00:20:40.634 { 00:20:40.634 "name": "BaseBdev3", 00:20:40.634 "aliases": [ 00:20:40.634 "9976701b-1356-11ef-8e8f-9dd684e56d79" 00:20:40.634 ], 00:20:40.634 "product_name": "Malloc disk", 00:20:40.634 "block_size": 512, 00:20:40.634 "num_blocks": 65536, 00:20:40.634 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:40.634 "assigned_rate_limits": { 00:20:40.634 "rw_ios_per_sec": 0, 00:20:40.634 "rw_mbytes_per_sec": 0, 00:20:40.634 "r_mbytes_per_sec": 0, 00:20:40.634 "w_mbytes_per_sec": 0 00:20:40.634 }, 00:20:40.634 "claimed": false, 00:20:40.634 "zoned": false, 00:20:40.634 "supported_io_types": { 00:20:40.634 "read": true, 00:20:40.635 "write": true, 00:20:40.635 "unmap": true, 00:20:40.635 "write_zeroes": true, 00:20:40.635 "flush": true, 00:20:40.635 "reset": true, 00:20:40.635 "compare": false, 00:20:40.635 "compare_and_write": false, 00:20:40.635 "abort": true, 00:20:40.635 "nvme_admin": false, 00:20:40.635 "nvme_io": false 00:20:40.635 }, 00:20:40.635 "memory_domains": [ 00:20:40.635 { 00:20:40.635 "dma_device_id": "system", 00:20:40.635 "dma_device_type": 1 00:20:40.635 }, 00:20:40.635 { 00:20:40.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.635 "dma_device_type": 2 00:20:40.635 } 00:20:40.635 ], 00:20:40.635 "driver_specific": {} 00:20:40.635 } 00:20:40.635 ] 00:20:40.635 07:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:40.635 07:33:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:20:40.635 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:20:40.635 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:40.896 [2024-05-16 07:33:34.236512] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:40.896 [2024-05-16 07:33:34.236563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:40.896 [2024-05-16 07:33:34.236571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.896 [2024-05-16 07:33:34.237019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.896 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.154 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.154 "name": "Existed_Raid", 00:20:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.154 "strip_size_kb": 64, 00:20:41.154 "state": "configuring", 00:20:41.154 "raid_level": "raid0", 00:20:41.154 "superblock": false, 00:20:41.154 "num_base_bdevs": 3, 00:20:41.154 "num_base_bdevs_discovered": 2, 00:20:41.154 "num_base_bdevs_operational": 3, 00:20:41.154 "base_bdevs_list": [ 00:20:41.154 { 00:20:41.154 "name": "BaseBdev1", 00:20:41.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.154 "is_configured": false, 00:20:41.154 "data_offset": 0, 00:20:41.154 "data_size": 0 00:20:41.154 }, 00:20:41.154 { 00:20:41.154 "name": "BaseBdev2", 00:20:41.154 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:41.154 "is_configured": true, 00:20:41.154 "data_offset": 0, 00:20:41.154 "data_size": 65536 00:20:41.154 }, 00:20:41.154 { 00:20:41.154 "name": "BaseBdev3", 00:20:41.154 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:41.154 "is_configured": true, 00:20:41.154 "data_offset": 0, 00:20:41.154 "data_size": 65536 
00:20:41.154 } 00:20:41.154 ] 00:20:41.154 }' 00:20:41.154 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.154 07:33:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.413 07:33:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:41.673 [2024-05-16 07:33:35.116435] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.673 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.932 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.932 "name": "Existed_Raid", 00:20:41.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.932 "strip_size_kb": 64, 00:20:41.932 "state": "configuring", 00:20:41.932 "raid_level": "raid0", 00:20:41.932 "superblock": false, 00:20:41.932 "num_base_bdevs": 3, 00:20:41.932 "num_base_bdevs_discovered": 1, 00:20:41.932 "num_base_bdevs_operational": 3, 00:20:41.932 "base_bdevs_list": [ 00:20:41.932 { 00:20:41.932 "name": "BaseBdev1", 00:20:41.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.932 "is_configured": false, 00:20:41.932 "data_offset": 0, 00:20:41.932 "data_size": 0 00:20:41.932 }, 00:20:41.932 { 00:20:41.932 "name": null, 00:20:41.932 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:41.932 "is_configured": false, 00:20:41.932 "data_offset": 0, 00:20:41.932 "data_size": 65536 00:20:41.932 }, 00:20:41.932 { 00:20:41.932 "name": "BaseBdev3", 00:20:41.932 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:41.932 "is_configured": true, 00:20:41.932 "data_offset": 0, 00:20:41.932 "data_size": 65536 00:20:41.932 } 00:20:41.932 ] 00:20:41.932 }' 00:20:41.932 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.932 07:33:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.499 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:20:42.499 07:33:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.499 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:20:42.499 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:42.756 [2024-05-16 07:33:36.260483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.756 BaseBdev1 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:42.756 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.014 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:43.273 [ 00:20:43.273 { 00:20:43.273 "name": "BaseBdev1", 00:20:43.273 "aliases": [ 00:20:43.273 "9b22432e-1356-11ef-8e8f-9dd684e56d79" 00:20:43.273 ], 00:20:43.273 "product_name": "Malloc disk", 00:20:43.273 "block_size": 512, 00:20:43.273 "num_blocks": 65536, 00:20:43.273 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:43.273 "assigned_rate_limits": { 00:20:43.273 "rw_ios_per_sec": 0, 00:20:43.273 "rw_mbytes_per_sec": 0, 00:20:43.273 "r_mbytes_per_sec": 0, 00:20:43.273 "w_mbytes_per_sec": 0 00:20:43.273 }, 00:20:43.273 "claimed": true, 00:20:43.273 "claim_type": "exclusive_write", 00:20:43.273 "zoned": false, 00:20:43.273 "supported_io_types": { 00:20:43.273 "read": true, 00:20:43.273 "write": true, 00:20:43.273 "unmap": true, 00:20:43.273 "write_zeroes": true, 00:20:43.273 "flush": true, 00:20:43.273 "reset": true, 00:20:43.273 "compare": false, 00:20:43.273 "compare_and_write": false, 00:20:43.273 "abort": true, 00:20:43.273 "nvme_admin": false, 00:20:43.273 "nvme_io": false 00:20:43.273 }, 00:20:43.273 "memory_domains": [ 00:20:43.273 { 00:20:43.273 "dma_device_id": "system", 00:20:43.273 "dma_device_type": 1 00:20:43.273 }, 00:20:43.273 { 00:20:43.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.273 "dma_device_type": 2 00:20:43.273 } 00:20:43.273 ], 00:20:43.273 "driver_specific": {} 00:20:43.273 } 00:20:43.273 ] 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.273 07:33:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.532 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.532 "name": "Existed_Raid", 00:20:43.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.532 "strip_size_kb": 64, 00:20:43.532 "state": "configuring", 00:20:43.532 "raid_level": "raid0", 00:20:43.532 "superblock": false, 00:20:43.532 "num_base_bdevs": 3, 00:20:43.532 "num_base_bdevs_discovered": 2, 00:20:43.532 "num_base_bdevs_operational": 3, 00:20:43.532 "base_bdevs_list": [ 00:20:43.532 { 00:20:43.532 "name": "BaseBdev1", 00:20:43.532 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:43.532 "is_configured": true, 00:20:43.532 "data_offset": 0, 00:20:43.532 "data_size": 65536 00:20:43.532 }, 00:20:43.532 { 00:20:43.532 "name": null, 00:20:43.532 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:43.532 "is_configured": false, 00:20:43.532 "data_offset": 0, 00:20:43.532 "data_size": 65536 00:20:43.532 }, 00:20:43.532 { 00:20:43.532 "name": "BaseBdev3", 00:20:43.532 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:43.532 "is_configured": true, 00:20:43.532 "data_offset": 0, 00:20:43.532 "data_size": 65536 00:20:43.532 } 00:20:43.532 ] 00:20:43.532 }' 00:20:43.532 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.532 07:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.791 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.791 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:44.050 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:44.050 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:44.309 [2024-05-16 07:33:37.760260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
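The create-and-wait sequence traced above (bdev_malloc_create followed by the waitforbdev helper) boils down to three RPC calls; a minimal sketch, assuming the same socket path and the 32 MiB / 512-byte-block malloc geometry used throughout this run:

    # create the malloc bdev the configuring raid is still waiting for;
    # the raid claims it as soon as it appears ("bdev BaseBdev1 is claimed" above)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    # let any examine callbacks finish, then wait for the bdev to show up (the helper passes -t 2000)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000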
00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.309 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.567 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.567 "name": "Existed_Raid", 00:20:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.567 "strip_size_kb": 64, 00:20:44.567 "state": "configuring", 00:20:44.567 "raid_level": "raid0", 00:20:44.567 "superblock": false, 00:20:44.567 "num_base_bdevs": 3, 00:20:44.567 "num_base_bdevs_discovered": 1, 00:20:44.567 "num_base_bdevs_operational": 3, 00:20:44.567 "base_bdevs_list": [ 00:20:44.567 { 00:20:44.567 "name": "BaseBdev1", 00:20:44.567 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:44.567 "is_configured": true, 00:20:44.567 "data_offset": 0, 00:20:44.567 "data_size": 65536 00:20:44.567 }, 00:20:44.567 { 00:20:44.567 "name": null, 00:20:44.567 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:44.567 "is_configured": false, 00:20:44.567 "data_offset": 0, 00:20:44.567 "data_size": 65536 00:20:44.567 }, 00:20:44.567 { 00:20:44.567 "name": null, 00:20:44.567 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:44.567 "is_configured": false, 00:20:44.567 "data_offset": 0, 00:20:44.567 "data_size": 65536 00:20:44.567 } 00:20:44.567 ] 00:20:44.567 }' 00:20:44.567 07:33:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.567 07:33:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.825 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.825 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:45.082 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:20:45.082 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:45.340 [2024-05-16 07:33:38.800219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:45.340 07:33:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.340 07:33:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.609 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.609 "name": "Existed_Raid", 00:20:45.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.609 "strip_size_kb": 64, 00:20:45.609 "state": "configuring", 00:20:45.609 "raid_level": "raid0", 00:20:45.609 "superblock": false, 00:20:45.609 "num_base_bdevs": 3, 00:20:45.609 "num_base_bdevs_discovered": 2, 00:20:45.609 "num_base_bdevs_operational": 3, 00:20:45.609 "base_bdevs_list": [ 00:20:45.609 { 00:20:45.609 "name": "BaseBdev1", 00:20:45.609 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:45.609 "is_configured": true, 00:20:45.609 "data_offset": 0, 00:20:45.609 "data_size": 65536 00:20:45.609 }, 00:20:45.609 { 00:20:45.609 "name": null, 00:20:45.609 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:45.609 "is_configured": false, 00:20:45.609 "data_offset": 0, 00:20:45.609 "data_size": 65536 00:20:45.609 }, 00:20:45.609 { 00:20:45.609 "name": "BaseBdev3", 00:20:45.609 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:45.609 "is_configured": true, 00:20:45.609 "data_offset": 0, 00:20:45.610 "data_size": 65536 00:20:45.610 } 00:20:45.610 ] 00:20:45.610 }' 00:20:45.610 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.610 07:33:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.891 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.891 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:46.149 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:20:46.149 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:46.408 [2024-05-16 07:33:39.796169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.408 07:33:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.667 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.667 "name": "Existed_Raid", 00:20:46.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.667 "strip_size_kb": 64, 00:20:46.667 "state": "configuring", 00:20:46.667 "raid_level": "raid0", 00:20:46.667 "superblock": false, 00:20:46.667 "num_base_bdevs": 3, 00:20:46.667 "num_base_bdevs_discovered": 1, 00:20:46.667 "num_base_bdevs_operational": 3, 00:20:46.667 "base_bdevs_list": [ 00:20:46.667 { 00:20:46.667 "name": null, 00:20:46.667 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:46.667 "is_configured": false, 00:20:46.667 "data_offset": 0, 00:20:46.667 "data_size": 65536 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "name": null, 00:20:46.667 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:46.667 "is_configured": false, 00:20:46.667 "data_offset": 0, 00:20:46.667 "data_size": 65536 00:20:46.667 }, 00:20:46.667 { 00:20:46.667 "name": "BaseBdev3", 00:20:46.667 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:46.667 "is_configured": true, 00:20:46.667 "data_offset": 0, 00:20:46.667 "data_size": 65536 00:20:46.667 } 00:20:46.667 ] 00:20:46.667 }' 00:20:46.667 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.667 07:33:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.926 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.926 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:47.184 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:20:47.184 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:47.444 [2024-05-16 07:33:40.944840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.444 07:33:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.444 07:33:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.703 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:47.703 "name": "Existed_Raid", 00:20:47.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.703 "strip_size_kb": 64, 00:20:47.703 "state": "configuring", 00:20:47.703 "raid_level": "raid0", 00:20:47.703 "superblock": false, 00:20:47.703 "num_base_bdevs": 3, 00:20:47.703 "num_base_bdevs_discovered": 2, 00:20:47.703 "num_base_bdevs_operational": 3, 00:20:47.703 "base_bdevs_list": [ 00:20:47.703 { 00:20:47.703 "name": null, 00:20:47.703 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:47.703 "is_configured": false, 00:20:47.703 "data_offset": 0, 00:20:47.703 "data_size": 65536 00:20:47.703 }, 00:20:47.703 { 00:20:47.703 "name": "BaseBdev2", 00:20:47.703 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:47.703 "is_configured": true, 00:20:47.703 "data_offset": 0, 00:20:47.703 "data_size": 65536 00:20:47.703 }, 00:20:47.703 { 00:20:47.703 "name": "BaseBdev3", 00:20:47.703 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:47.703 "is_configured": true, 00:20:47.703 "data_offset": 0, 00:20:47.703 "data_size": 65536 00:20:47.703 } 00:20:47.703 ] 00:20:47.703 }' 00:20:47.703 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:47.703 07:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.980 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.980 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:48.257 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:20:48.257 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.257 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 
-- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:48.515 07:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9b22432e-1356-11ef-8e8f-9dd684e56d79 00:20:48.775 [2024-05-16 07:33:42.220884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:48.775 [2024-05-16 07:33:42.220913] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cf45a00 00:20:48.775 [2024-05-16 07:33:42.220917] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:48.775 [2024-05-16 07:33:42.220938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cfa8e20 00:20:48.775 [2024-05-16 07:33:42.220992] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cf45a00 00:20:48.775 [2024-05-16 07:33:42.220996] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cf45a00 00:20:48.775 [2024-05-16 07:33:42.221024] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.775 NewBaseBdev 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:48.775 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:49.033 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:49.292 [ 00:20:49.292 { 00:20:49.292 "name": "NewBaseBdev", 00:20:49.292 "aliases": [ 00:20:49.292 "9b22432e-1356-11ef-8e8f-9dd684e56d79" 00:20:49.292 ], 00:20:49.292 "product_name": "Malloc disk", 00:20:49.292 "block_size": 512, 00:20:49.292 "num_blocks": 65536, 00:20:49.292 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:49.292 "assigned_rate_limits": { 00:20:49.292 "rw_ios_per_sec": 0, 00:20:49.292 "rw_mbytes_per_sec": 0, 00:20:49.292 "r_mbytes_per_sec": 0, 00:20:49.292 "w_mbytes_per_sec": 0 00:20:49.292 }, 00:20:49.292 "claimed": true, 00:20:49.292 "claim_type": "exclusive_write", 00:20:49.292 "zoned": false, 00:20:49.292 "supported_io_types": { 00:20:49.292 "read": true, 00:20:49.292 "write": true, 00:20:49.292 "unmap": true, 00:20:49.292 "write_zeroes": true, 00:20:49.292 "flush": true, 00:20:49.292 "reset": true, 00:20:49.292 "compare": false, 00:20:49.292 "compare_and_write": false, 00:20:49.292 "abort": true, 00:20:49.292 "nvme_admin": false, 00:20:49.292 "nvme_io": false 00:20:49.292 }, 00:20:49.292 "memory_domains": [ 00:20:49.292 { 00:20:49.292 "dma_device_id": "system", 00:20:49.292 "dma_device_type": 1 00:20:49.292 }, 00:20:49.292 { 00:20:49.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.292 "dma_device_type": 2 00:20:49.292 
} 00:20:49.292 ], 00:20:49.292 "driver_specific": {} 00:20:49.292 } 00:20:49.292 ] 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.292 07:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.551 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.551 "name": "Existed_Raid", 00:20:49.551 "uuid": "9eafc4cf-1356-11ef-8e8f-9dd684e56d79", 00:20:49.551 "strip_size_kb": 64, 00:20:49.551 "state": "online", 00:20:49.551 "raid_level": "raid0", 00:20:49.551 "superblock": false, 00:20:49.551 "num_base_bdevs": 3, 00:20:49.551 "num_base_bdevs_discovered": 3, 00:20:49.551 "num_base_bdevs_operational": 3, 00:20:49.551 "base_bdevs_list": [ 00:20:49.551 { 00:20:49.551 "name": "NewBaseBdev", 00:20:49.551 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:49.551 "is_configured": true, 00:20:49.551 "data_offset": 0, 00:20:49.551 "data_size": 65536 00:20:49.551 }, 00:20:49.551 { 00:20:49.551 "name": "BaseBdev2", 00:20:49.551 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:49.551 "is_configured": true, 00:20:49.551 "data_offset": 0, 00:20:49.551 "data_size": 65536 00:20:49.551 }, 00:20:49.551 { 00:20:49.551 "name": "BaseBdev3", 00:20:49.551 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:49.551 "is_configured": true, 00:20:49.551 "data_offset": 0, 00:20:49.552 "data_size": 65536 00:20:49.552 } 00:20:49.552 ] 00:20:49.552 }' 00:20:49.552 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.552 07:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:50.121 07:33:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:50.121 [2024-05-16 07:33:43.620765] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:50.121 "name": "Existed_Raid", 00:20:50.121 "aliases": [ 00:20:50.121 "9eafc4cf-1356-11ef-8e8f-9dd684e56d79" 00:20:50.121 ], 00:20:50.121 "product_name": "Raid Volume", 00:20:50.121 "block_size": 512, 00:20:50.121 "num_blocks": 196608, 00:20:50.121 "uuid": "9eafc4cf-1356-11ef-8e8f-9dd684e56d79", 00:20:50.121 "assigned_rate_limits": { 00:20:50.121 "rw_ios_per_sec": 0, 00:20:50.121 "rw_mbytes_per_sec": 0, 00:20:50.121 "r_mbytes_per_sec": 0, 00:20:50.121 "w_mbytes_per_sec": 0 00:20:50.121 }, 00:20:50.121 "claimed": false, 00:20:50.121 "zoned": false, 00:20:50.121 "supported_io_types": { 00:20:50.121 "read": true, 00:20:50.121 "write": true, 00:20:50.121 "unmap": true, 00:20:50.121 "write_zeroes": true, 00:20:50.121 "flush": true, 00:20:50.121 "reset": true, 00:20:50.121 "compare": false, 00:20:50.121 "compare_and_write": false, 00:20:50.121 "abort": false, 00:20:50.121 "nvme_admin": false, 00:20:50.121 "nvme_io": false 00:20:50.121 }, 00:20:50.121 "memory_domains": [ 00:20:50.121 { 00:20:50.121 "dma_device_id": "system", 00:20:50.121 "dma_device_type": 1 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.121 "dma_device_type": 2 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "dma_device_id": "system", 00:20:50.121 "dma_device_type": 1 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.121 "dma_device_type": 2 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "dma_device_id": "system", 00:20:50.121 "dma_device_type": 1 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.121 "dma_device_type": 2 00:20:50.121 } 00:20:50.121 ], 00:20:50.121 "driver_specific": { 00:20:50.121 "raid": { 00:20:50.121 "uuid": "9eafc4cf-1356-11ef-8e8f-9dd684e56d79", 00:20:50.121 "strip_size_kb": 64, 00:20:50.121 "state": "online", 00:20:50.121 "raid_level": "raid0", 00:20:50.121 "superblock": false, 00:20:50.121 "num_base_bdevs": 3, 00:20:50.121 "num_base_bdevs_discovered": 3, 00:20:50.121 "num_base_bdevs_operational": 3, 00:20:50.121 "base_bdevs_list": [ 00:20:50.121 { 00:20:50.121 "name": "NewBaseBdev", 00:20:50.121 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:50.121 "is_configured": true, 00:20:50.121 "data_offset": 0, 00:20:50.121 "data_size": 65536 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "name": "BaseBdev2", 00:20:50.121 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:50.121 "is_configured": true, 00:20:50.121 "data_offset": 0, 00:20:50.121 "data_size": 65536 00:20:50.121 }, 00:20:50.121 { 00:20:50.121 "name": "BaseBdev3", 00:20:50.121 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:50.121 "is_configured": true, 00:20:50.121 "data_offset": 0, 00:20:50.121 "data_size": 65536 00:20:50.121 } 00:20:50.121 ] 00:20:50.121 } 00:20:50.121 } 00:20:50.121 }' 00:20:50.121 07:33:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:20:50.121 BaseBdev2 00:20:50.121 BaseBdev3' 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:50.121 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:50.383 "name": "NewBaseBdev", 00:20:50.383 "aliases": [ 00:20:50.383 "9b22432e-1356-11ef-8e8f-9dd684e56d79" 00:20:50.383 ], 00:20:50.383 "product_name": "Malloc disk", 00:20:50.383 "block_size": 512, 00:20:50.383 "num_blocks": 65536, 00:20:50.383 "uuid": "9b22432e-1356-11ef-8e8f-9dd684e56d79", 00:20:50.383 "assigned_rate_limits": { 00:20:50.383 "rw_ios_per_sec": 0, 00:20:50.383 "rw_mbytes_per_sec": 0, 00:20:50.383 "r_mbytes_per_sec": 0, 00:20:50.383 "w_mbytes_per_sec": 0 00:20:50.383 }, 00:20:50.383 "claimed": true, 00:20:50.383 "claim_type": "exclusive_write", 00:20:50.383 "zoned": false, 00:20:50.383 "supported_io_types": { 00:20:50.383 "read": true, 00:20:50.383 "write": true, 00:20:50.383 "unmap": true, 00:20:50.383 "write_zeroes": true, 00:20:50.383 "flush": true, 00:20:50.383 "reset": true, 00:20:50.383 "compare": false, 00:20:50.383 "compare_and_write": false, 00:20:50.383 "abort": true, 00:20:50.383 "nvme_admin": false, 00:20:50.383 "nvme_io": false 00:20:50.383 }, 00:20:50.383 "memory_domains": [ 00:20:50.383 { 00:20:50.383 "dma_device_id": "system", 00:20:50.383 "dma_device_type": 1 00:20:50.383 }, 00:20:50.383 { 00:20:50.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.383 "dma_device_type": 2 00:20:50.383 } 00:20:50.383 ], 00:20:50.383 "driver_specific": {} 00:20:50.383 }' 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:50.383 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:50.644 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:50.644 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:50.644 07:33:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:50.644 07:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:50.903 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:50.903 "name": "BaseBdev2", 00:20:50.903 "aliases": [ 00:20:50.903 "98f0c761-1356-11ef-8e8f-9dd684e56d79" 00:20:50.903 ], 00:20:50.903 "product_name": "Malloc disk", 00:20:50.903 "block_size": 512, 00:20:50.903 "num_blocks": 65536, 00:20:50.903 "uuid": "98f0c761-1356-11ef-8e8f-9dd684e56d79", 00:20:50.903 "assigned_rate_limits": { 00:20:50.903 "rw_ios_per_sec": 0, 00:20:50.903 "rw_mbytes_per_sec": 0, 00:20:50.903 "r_mbytes_per_sec": 0, 00:20:50.903 "w_mbytes_per_sec": 0 00:20:50.903 }, 00:20:50.903 "claimed": true, 00:20:50.903 "claim_type": "exclusive_write", 00:20:50.903 "zoned": false, 00:20:50.904 "supported_io_types": { 00:20:50.904 "read": true, 00:20:50.904 "write": true, 00:20:50.904 "unmap": true, 00:20:50.904 "write_zeroes": true, 00:20:50.904 "flush": true, 00:20:50.904 "reset": true, 00:20:50.904 "compare": false, 00:20:50.904 "compare_and_write": false, 00:20:50.904 "abort": true, 00:20:50.904 "nvme_admin": false, 00:20:50.904 "nvme_io": false 00:20:50.904 }, 00:20:50.904 "memory_domains": [ 00:20:50.904 { 00:20:50.904 "dma_device_id": "system", 00:20:50.904 "dma_device_type": 1 00:20:50.904 }, 00:20:50.904 { 00:20:50.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.904 "dma_device_type": 2 00:20:50.904 } 00:20:50.904 ], 00:20:50.904 "driver_specific": {} 00:20:50.904 }' 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:50.904 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:20:51.165 "name": "BaseBdev3", 00:20:51.165 "aliases": [ 00:20:51.165 "9976701b-1356-11ef-8e8f-9dd684e56d79" 00:20:51.165 ], 00:20:51.165 "product_name": "Malloc 
disk", 00:20:51.165 "block_size": 512, 00:20:51.165 "num_blocks": 65536, 00:20:51.165 "uuid": "9976701b-1356-11ef-8e8f-9dd684e56d79", 00:20:51.165 "assigned_rate_limits": { 00:20:51.165 "rw_ios_per_sec": 0, 00:20:51.165 "rw_mbytes_per_sec": 0, 00:20:51.165 "r_mbytes_per_sec": 0, 00:20:51.165 "w_mbytes_per_sec": 0 00:20:51.165 }, 00:20:51.165 "claimed": true, 00:20:51.165 "claim_type": "exclusive_write", 00:20:51.165 "zoned": false, 00:20:51.165 "supported_io_types": { 00:20:51.165 "read": true, 00:20:51.165 "write": true, 00:20:51.165 "unmap": true, 00:20:51.165 "write_zeroes": true, 00:20:51.165 "flush": true, 00:20:51.165 "reset": true, 00:20:51.165 "compare": false, 00:20:51.165 "compare_and_write": false, 00:20:51.165 "abort": true, 00:20:51.165 "nvme_admin": false, 00:20:51.165 "nvme_io": false 00:20:51.165 }, 00:20:51.165 "memory_domains": [ 00:20:51.165 { 00:20:51.165 "dma_device_id": "system", 00:20:51.165 "dma_device_type": 1 00:20:51.165 }, 00:20:51.165 { 00:20:51.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.165 "dma_device_type": 2 00:20:51.165 } 00:20:51.165 ], 00:20:51.165 "driver_specific": {} 00:20:51.165 }' 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:20:51.165 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:51.427 [2024-05-16 07:33:44.864659] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.427 [2024-05-16 07:33:44.864685] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.428 [2024-05-16 07:33:44.864703] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.428 [2024-05-16 07:33:44.864715] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.428 [2024-05-16 07:33:44.864720] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cf45a00 name Existed_Raid, state offline 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 52041 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 52041 ']' 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # kill -0 52041 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 52041 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:20:51.428 killing process with pid 52041 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52041' 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 52041 00:20:51.428 [2024-05-16 07:33:44.893866] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.428 07:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 52041 00:20:51.428 [2024-05-16 07:33:44.908186] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.686 07:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:20:51.686 00:20:51.686 real 0m24.474s 00:20:51.686 user 0m44.758s 00:20:51.686 sys 0m3.440s 00:20:51.687 ************************************ 00:20:51.687 END TEST raid_state_function_test 00:20:51.687 ************************************ 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.687 07:33:45 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:20:51.687 07:33:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:51.687 07:33:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:51.687 07:33:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.687 ************************************ 00:20:51.687 START TEST raid_state_function_test_sb 00:20:51.687 ************************************ 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:51.687 07:33:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=52770 00:20:51.687 Process raid pid: 52770 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 52770' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 52770 /var/tmp/spdk-raid.sock 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 52770 ']' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.687 07:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.687 [2024-05-16 07:33:45.135654] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
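[editorial note] The trace above shows raid_state_function_test_sb setting up its parameters (raid0, 3 base bdevs, superblock, 64 KiB strip) and backgrounding bdev_svc as the RPC target before waiting on its socket. A minimal sketch of that bring-up is below; the binary path and socket name are taken from the log, while the polling loop is only a simplified stand-in for the waitforlisten() helper and rpc_get_methods is used as a generic liveness probe.

    SPDK_DIR=/usr/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    # Launch the bdev service app that will host the raid/malloc bdevs under test.
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Wait until the app answers on its UNIX-domain RPC socket.
    until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done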
00:20:51.687 [2024-05-16 07:33:45.135839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:20:52.252 EAL: TSC is not safe to use in SMP mode 00:20:52.252 EAL: TSC is not invariant 00:20:52.252 [2024-05-16 07:33:45.636897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.252 [2024-05-16 07:33:45.748802] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:20:52.252 [2024-05-16 07:33:45.751455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.252 [2024-05-16 07:33:45.752489] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.252 [2024-05-16 07:33:45.752515] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.820 07:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.820 07:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:52.820 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:53.079 [2024-05-16 07:33:46.478316] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.079 [2024-05-16 07:33:46.478379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:53.079 [2024-05-16 07:33:46.478384] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.079 [2024-05-16 07:33:46.478393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.079 [2024-05-16 07:33:46.478397] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:53.079 [2024-05-16 07:33:46.478404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.079 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.338 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.338 "name": "Existed_Raid", 00:20:53.338 "uuid": "a13964f0-1356-11ef-8e8f-9dd684e56d79", 00:20:53.338 "strip_size_kb": 64, 00:20:53.338 "state": "configuring", 00:20:53.338 "raid_level": "raid0", 00:20:53.338 "superblock": true, 00:20:53.338 "num_base_bdevs": 3, 00:20:53.338 "num_base_bdevs_discovered": 0, 00:20:53.338 "num_base_bdevs_operational": 3, 00:20:53.338 "base_bdevs_list": [ 00:20:53.338 { 00:20:53.338 "name": "BaseBdev1", 00:20:53.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.338 "is_configured": false, 00:20:53.338 "data_offset": 0, 00:20:53.338 "data_size": 0 00:20:53.338 }, 00:20:53.338 { 00:20:53.338 "name": "BaseBdev2", 00:20:53.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.338 "is_configured": false, 00:20:53.338 "data_offset": 0, 00:20:53.338 "data_size": 0 00:20:53.338 }, 00:20:53.338 { 00:20:53.338 "name": "BaseBdev3", 00:20:53.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.338 "is_configured": false, 00:20:53.338 "data_offset": 0, 00:20:53.338 "data_size": 0 00:20:53.338 } 00:20:53.338 ] 00:20:53.338 }' 00:20:53.338 07:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.338 07:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 07:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:53.855 [2024-05-16 07:33:47.282251] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:53.855 [2024-05-16 07:33:47.282276] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d27c500 name Existed_Raid, state configuring 00:20:53.855 07:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:54.114 [2024-05-16 07:33:47.534254] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:54.114 [2024-05-16 07:33:47.534309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:54.114 [2024-05-16 07:33:47.534314] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:54.114 [2024-05-16 07:33:47.534322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:54.114 [2024-05-16 07:33:47.534325] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:54.114 [2024-05-16 07:33:47.534332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:54.114 07:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:54.373 [2024-05-16 07:33:47.807151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.373 BaseBdev1 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 
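[editorial note] The sequence traced above creates the raid0 bdev before any of its base bdevs exist (each reported as "doesn't exist now"), verifies the "configuring" state, then starts registering malloc base bdevs one by one. A minimal sketch of the equivalent RPC sequence follows, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock; all commands and flags are those visible in the trace, and the final jq filter adds a .state projection for brevity.

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create the raid0 bdev with superblock (-s) and 64 KiB strip size (-z 64);
    # it stays in the "configuring" state until all base bdevs appear.
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Register the base bdevs: 32 MiB each with 512-byte blocks (65536 blocks).
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done

    # Once all three are claimed, the raid bdev should report state "online".
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'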
00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:54.373 07:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.631 07:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:54.889 [ 00:20:54.889 { 00:20:54.889 "name": "BaseBdev1", 00:20:54.889 "aliases": [ 00:20:54.889 "a2040586-1356-11ef-8e8f-9dd684e56d79" 00:20:54.889 ], 00:20:54.889 "product_name": "Malloc disk", 00:20:54.889 "block_size": 512, 00:20:54.889 "num_blocks": 65536, 00:20:54.889 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:54.889 "assigned_rate_limits": { 00:20:54.889 "rw_ios_per_sec": 0, 00:20:54.889 "rw_mbytes_per_sec": 0, 00:20:54.889 "r_mbytes_per_sec": 0, 00:20:54.889 "w_mbytes_per_sec": 0 00:20:54.889 }, 00:20:54.889 "claimed": true, 00:20:54.889 "claim_type": "exclusive_write", 00:20:54.889 "zoned": false, 00:20:54.889 "supported_io_types": { 00:20:54.889 "read": true, 00:20:54.889 "write": true, 00:20:54.889 "unmap": true, 00:20:54.889 "write_zeroes": true, 00:20:54.889 "flush": true, 00:20:54.889 "reset": true, 00:20:54.889 "compare": false, 00:20:54.889 "compare_and_write": false, 00:20:54.889 "abort": true, 00:20:54.889 "nvme_admin": false, 00:20:54.889 "nvme_io": false 00:20:54.889 }, 00:20:54.889 "memory_domains": [ 00:20:54.889 { 00:20:54.889 "dma_device_id": "system", 00:20:54.889 "dma_device_type": 1 00:20:54.889 }, 00:20:54.889 { 00:20:54.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.889 "dma_device_type": 2 00:20:54.889 } 00:20:54.889 ], 00:20:54.889 "driver_specific": {} 00:20:54.889 } 00:20:54.889 ] 00:20:54.889 07:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.890 07:33:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.890 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.148 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.148 "name": "Existed_Raid", 00:20:55.148 "uuid": "a1da8499-1356-11ef-8e8f-9dd684e56d79", 00:20:55.148 "strip_size_kb": 64, 00:20:55.148 "state": "configuring", 00:20:55.148 "raid_level": "raid0", 00:20:55.148 "superblock": true, 00:20:55.148 "num_base_bdevs": 3, 00:20:55.148 "num_base_bdevs_discovered": 1, 00:20:55.148 "num_base_bdevs_operational": 3, 00:20:55.148 "base_bdevs_list": [ 00:20:55.148 { 00:20:55.148 "name": "BaseBdev1", 00:20:55.148 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:55.148 "is_configured": true, 00:20:55.148 "data_offset": 2048, 00:20:55.148 "data_size": 63488 00:20:55.148 }, 00:20:55.148 { 00:20:55.148 "name": "BaseBdev2", 00:20:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.148 "is_configured": false, 00:20:55.148 "data_offset": 0, 00:20:55.148 "data_size": 0 00:20:55.148 }, 00:20:55.148 { 00:20:55.148 "name": "BaseBdev3", 00:20:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.148 "is_configured": false, 00:20:55.148 "data_offset": 0, 00:20:55.148 "data_size": 0 00:20:55.148 } 00:20:55.148 ] 00:20:55.148 }' 00:20:55.148 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.148 07:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.406 07:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:55.665 [2024-05-16 07:33:49.094204] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:55.665 [2024-05-16 07:33:49.094246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d27c500 name Existed_Raid, state configuring 00:20:55.665 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:55.923 [2024-05-16 07:33:49.426220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.923 [2024-05-16 07:33:49.426918] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.923 [2024-05-16 07:33:49.426960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.923 [2024-05-16 07:33:49.426965] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:55.923 [2024-05-16 07:33:49.426987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.923 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.182 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.182 "name": "Existed_Raid", 00:20:56.182 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:20:56.182 "strip_size_kb": 64, 00:20:56.182 "state": "configuring", 00:20:56.182 "raid_level": "raid0", 00:20:56.182 "superblock": true, 00:20:56.182 "num_base_bdevs": 3, 00:20:56.182 "num_base_bdevs_discovered": 1, 00:20:56.182 "num_base_bdevs_operational": 3, 00:20:56.182 "base_bdevs_list": [ 00:20:56.182 { 00:20:56.182 "name": "BaseBdev1", 00:20:56.182 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:56.182 "is_configured": true, 00:20:56.182 "data_offset": 2048, 00:20:56.182 "data_size": 63488 00:20:56.182 }, 00:20:56.182 { 00:20:56.182 "name": "BaseBdev2", 00:20:56.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.182 "is_configured": false, 00:20:56.182 "data_offset": 0, 00:20:56.182 "data_size": 0 00:20:56.182 }, 00:20:56.182 { 00:20:56.183 "name": "BaseBdev3", 00:20:56.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.183 "is_configured": false, 00:20:56.183 "data_offset": 0, 00:20:56.183 "data_size": 0 00:20:56.183 } 00:20:56.183 ] 00:20:56.183 }' 00:20:56.183 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.183 07:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.441 07:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:56.699 [2024-05-16 07:33:50.186283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.699 BaseBdev2 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:56.699 07:33:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:56.699 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.958 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:57.527 [ 00:20:57.527 { 00:20:57.527 "name": "BaseBdev2", 00:20:57.527 "aliases": [ 00:20:57.527 "a36f2b43-1356-11ef-8e8f-9dd684e56d79" 00:20:57.527 ], 00:20:57.527 "product_name": "Malloc disk", 00:20:57.527 "block_size": 512, 00:20:57.527 "num_blocks": 65536, 00:20:57.527 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:20:57.527 "assigned_rate_limits": { 00:20:57.527 "rw_ios_per_sec": 0, 00:20:57.527 "rw_mbytes_per_sec": 0, 00:20:57.527 "r_mbytes_per_sec": 0, 00:20:57.527 "w_mbytes_per_sec": 0 00:20:57.527 }, 00:20:57.527 "claimed": true, 00:20:57.527 "claim_type": "exclusive_write", 00:20:57.527 "zoned": false, 00:20:57.527 "supported_io_types": { 00:20:57.527 "read": true, 00:20:57.527 "write": true, 00:20:57.527 "unmap": true, 00:20:57.527 "write_zeroes": true, 00:20:57.527 "flush": true, 00:20:57.527 "reset": true, 00:20:57.527 "compare": false, 00:20:57.527 "compare_and_write": false, 00:20:57.527 "abort": true, 00:20:57.528 "nvme_admin": false, 00:20:57.528 "nvme_io": false 00:20:57.528 }, 00:20:57.528 "memory_domains": [ 00:20:57.528 { 00:20:57.528 "dma_device_id": "system", 00:20:57.528 "dma_device_type": 1 00:20:57.528 }, 00:20:57.528 { 00:20:57.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.528 "dma_device_type": 2 00:20:57.528 } 00:20:57.528 ], 00:20:57.528 "driver_specific": {} 00:20:57.528 } 00:20:57.528 ] 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.528 07:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.786 07:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.786 "name": "Existed_Raid", 00:20:57.786 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:20:57.786 "strip_size_kb": 64, 00:20:57.786 "state": "configuring", 00:20:57.786 "raid_level": "raid0", 00:20:57.786 "superblock": true, 00:20:57.786 "num_base_bdevs": 3, 00:20:57.786 "num_base_bdevs_discovered": 2, 00:20:57.786 "num_base_bdevs_operational": 3, 00:20:57.786 "base_bdevs_list": [ 00:20:57.786 { 00:20:57.786 "name": "BaseBdev1", 00:20:57.786 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:57.786 "is_configured": true, 00:20:57.786 "data_offset": 2048, 00:20:57.786 "data_size": 63488 00:20:57.786 }, 00:20:57.786 { 00:20:57.786 "name": "BaseBdev2", 00:20:57.786 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:20:57.786 "is_configured": true, 00:20:57.786 "data_offset": 2048, 00:20:57.786 "data_size": 63488 00:20:57.786 }, 00:20:57.786 { 00:20:57.786 "name": "BaseBdev3", 00:20:57.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.786 "is_configured": false, 00:20:57.786 "data_offset": 0, 00:20:57.786 "data_size": 0 00:20:57.786 } 00:20:57.786 ] 00:20:57.786 }' 00:20:57.786 07:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.786 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.044 07:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:58.302 [2024-05-16 07:33:51.706294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:58.302 [2024-05-16 07:33:51.706385] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d27ca00 00:20:58.302 [2024-05-16 07:33:51.706396] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:58.302 [2024-05-16 07:33:51.706428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d2dfec0 00:20:58.302 [2024-05-16 07:33:51.706487] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d27ca00 00:20:58.302 [2024-05-16 07:33:51.706493] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d27ca00 00:20:58.302 [2024-05-16 07:33:51.706523] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.302 BaseBdev3 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:58.302 07:33:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.560 07:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:58.819 [ 00:20:58.819 { 00:20:58.819 "name": "BaseBdev3", 00:20:58.819 "aliases": [ 00:20:58.819 "a45719c7-1356-11ef-8e8f-9dd684e56d79" 00:20:58.819 ], 00:20:58.819 "product_name": "Malloc disk", 00:20:58.819 "block_size": 512, 00:20:58.819 "num_blocks": 65536, 00:20:58.819 "uuid": "a45719c7-1356-11ef-8e8f-9dd684e56d79", 00:20:58.819 "assigned_rate_limits": { 00:20:58.819 "rw_ios_per_sec": 0, 00:20:58.819 "rw_mbytes_per_sec": 0, 00:20:58.819 "r_mbytes_per_sec": 0, 00:20:58.819 "w_mbytes_per_sec": 0 00:20:58.819 }, 00:20:58.819 "claimed": true, 00:20:58.819 "claim_type": "exclusive_write", 00:20:58.819 "zoned": false, 00:20:58.819 "supported_io_types": { 00:20:58.819 "read": true, 00:20:58.819 "write": true, 00:20:58.819 "unmap": true, 00:20:58.819 "write_zeroes": true, 00:20:58.819 "flush": true, 00:20:58.819 "reset": true, 00:20:58.819 "compare": false, 00:20:58.819 "compare_and_write": false, 00:20:58.819 "abort": true, 00:20:58.819 "nvme_admin": false, 00:20:58.819 "nvme_io": false 00:20:58.819 }, 00:20:58.819 "memory_domains": [ 00:20:58.819 { 00:20:58.819 "dma_device_id": "system", 00:20:58.819 "dma_device_type": 1 00:20:58.819 }, 00:20:58.819 { 00:20:58.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.819 "dma_device_type": 2 00:20:58.819 } 00:20:58.819 ], 00:20:58.819 "driver_specific": {} 00:20:58.819 } 00:20:58.819 ] 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.819 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.076 07:33:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.076 "name": "Existed_Raid", 00:20:59.076 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:20:59.076 "strip_size_kb": 64, 00:20:59.076 "state": "online", 00:20:59.076 "raid_level": "raid0", 00:20:59.076 "superblock": true, 00:20:59.076 "num_base_bdevs": 3, 00:20:59.076 "num_base_bdevs_discovered": 3, 00:20:59.076 "num_base_bdevs_operational": 3, 00:20:59.076 "base_bdevs_list": [ 00:20:59.076 { 00:20:59.076 "name": "BaseBdev1", 00:20:59.076 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:59.076 "is_configured": true, 00:20:59.076 "data_offset": 2048, 00:20:59.076 "data_size": 63488 00:20:59.076 }, 00:20:59.076 { 00:20:59.076 "name": "BaseBdev2", 00:20:59.076 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:20:59.076 "is_configured": true, 00:20:59.076 "data_offset": 2048, 00:20:59.076 "data_size": 63488 00:20:59.076 }, 00:20:59.076 { 00:20:59.076 "name": "BaseBdev3", 00:20:59.076 "uuid": "a45719c7-1356-11ef-8e8f-9dd684e56d79", 00:20:59.076 "is_configured": true, 00:20:59.076 "data_offset": 2048, 00:20:59.076 "data_size": 63488 00:20:59.076 } 00:20:59.076 ] 00:20:59.076 }' 00:20:59.076 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.076 07:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:20:59.333 07:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:59.898 [2024-05-16 07:33:53.162126] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.898 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:20:59.898 "name": "Existed_Raid", 00:20:59.898 "aliases": [ 00:20:59.898 "a2fb3549-1356-11ef-8e8f-9dd684e56d79" 00:20:59.898 ], 00:20:59.898 "product_name": "Raid Volume", 00:20:59.898 "block_size": 512, 00:20:59.898 "num_blocks": 190464, 00:20:59.898 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:20:59.898 "assigned_rate_limits": { 00:20:59.898 "rw_ios_per_sec": 0, 00:20:59.898 "rw_mbytes_per_sec": 0, 00:20:59.898 "r_mbytes_per_sec": 0, 00:20:59.898 "w_mbytes_per_sec": 0 00:20:59.898 }, 00:20:59.898 "claimed": false, 00:20:59.898 "zoned": false, 00:20:59.898 "supported_io_types": { 00:20:59.898 "read": true, 00:20:59.898 "write": true, 00:20:59.898 "unmap": true, 00:20:59.898 "write_zeroes": true, 00:20:59.898 "flush": true, 00:20:59.898 "reset": true, 00:20:59.898 "compare": false, 00:20:59.898 "compare_and_write": false, 00:20:59.898 "abort": false, 00:20:59.898 "nvme_admin": false, 00:20:59.898 "nvme_io": false 
00:20:59.898 }, 00:20:59.898 "memory_domains": [ 00:20:59.898 { 00:20:59.898 "dma_device_id": "system", 00:20:59.898 "dma_device_type": 1 00:20:59.898 }, 00:20:59.898 { 00:20:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.898 "dma_device_type": 2 00:20:59.898 }, 00:20:59.898 { 00:20:59.898 "dma_device_id": "system", 00:20:59.898 "dma_device_type": 1 00:20:59.898 }, 00:20:59.898 { 00:20:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.898 "dma_device_type": 2 00:20:59.898 }, 00:20:59.898 { 00:20:59.898 "dma_device_id": "system", 00:20:59.898 "dma_device_type": 1 00:20:59.898 }, 00:20:59.898 { 00:20:59.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.898 "dma_device_type": 2 00:20:59.898 } 00:20:59.898 ], 00:20:59.898 "driver_specific": { 00:20:59.898 "raid": { 00:20:59.898 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:20:59.898 "strip_size_kb": 64, 00:20:59.898 "state": "online", 00:20:59.898 "raid_level": "raid0", 00:20:59.898 "superblock": true, 00:20:59.898 "num_base_bdevs": 3, 00:20:59.898 "num_base_bdevs_discovered": 3, 00:20:59.898 "num_base_bdevs_operational": 3, 00:20:59.898 "base_bdevs_list": [ 00:20:59.898 { 00:20:59.898 "name": "BaseBdev1", 00:20:59.899 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:20:59.899 "is_configured": true, 00:20:59.899 "data_offset": 2048, 00:20:59.899 "data_size": 63488 00:20:59.899 }, 00:20:59.899 { 00:20:59.899 "name": "BaseBdev2", 00:20:59.899 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:20:59.899 "is_configured": true, 00:20:59.899 "data_offset": 2048, 00:20:59.899 "data_size": 63488 00:20:59.899 }, 00:20:59.899 { 00:20:59.899 "name": "BaseBdev3", 00:20:59.899 "uuid": "a45719c7-1356-11ef-8e8f-9dd684e56d79", 00:20:59.899 "is_configured": true, 00:20:59.899 "data_offset": 2048, 00:20:59.899 "data_size": 63488 00:20:59.899 } 00:20:59.899 ] 00:20:59.899 } 00:20:59.899 } 00:20:59.899 }' 00:20:59.899 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:59.899 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:20:59.899 BaseBdev2 00:20:59.899 BaseBdev3' 00:20:59.899 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:20:59.899 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:59.899 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:00.158 "name": "BaseBdev1", 00:21:00.158 "aliases": [ 00:21:00.158 "a2040586-1356-11ef-8e8f-9dd684e56d79" 00:21:00.158 ], 00:21:00.158 "product_name": "Malloc disk", 00:21:00.158 "block_size": 512, 00:21:00.158 "num_blocks": 65536, 00:21:00.158 "uuid": "a2040586-1356-11ef-8e8f-9dd684e56d79", 00:21:00.158 "assigned_rate_limits": { 00:21:00.158 "rw_ios_per_sec": 0, 00:21:00.158 "rw_mbytes_per_sec": 0, 00:21:00.158 "r_mbytes_per_sec": 0, 00:21:00.158 "w_mbytes_per_sec": 0 00:21:00.158 }, 00:21:00.158 "claimed": true, 00:21:00.158 "claim_type": "exclusive_write", 00:21:00.158 "zoned": false, 00:21:00.158 "supported_io_types": { 00:21:00.158 "read": true, 00:21:00.158 "write": true, 00:21:00.158 "unmap": true, 00:21:00.158 "write_zeroes": true, 00:21:00.158 "flush": true, 00:21:00.158 
"reset": true, 00:21:00.158 "compare": false, 00:21:00.158 "compare_and_write": false, 00:21:00.158 "abort": true, 00:21:00.158 "nvme_admin": false, 00:21:00.158 "nvme_io": false 00:21:00.158 }, 00:21:00.158 "memory_domains": [ 00:21:00.158 { 00:21:00.158 "dma_device_id": "system", 00:21:00.158 "dma_device_type": 1 00:21:00.158 }, 00:21:00.158 { 00:21:00.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.158 "dma_device_type": 2 00:21:00.158 } 00:21:00.158 ], 00:21:00.158 "driver_specific": {} 00:21:00.158 }' 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:00.158 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:00.429 "name": "BaseBdev2", 00:21:00.429 "aliases": [ 00:21:00.429 "a36f2b43-1356-11ef-8e8f-9dd684e56d79" 00:21:00.429 ], 00:21:00.429 "product_name": "Malloc disk", 00:21:00.429 "block_size": 512, 00:21:00.429 "num_blocks": 65536, 00:21:00.429 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:21:00.429 "assigned_rate_limits": { 00:21:00.429 "rw_ios_per_sec": 0, 00:21:00.429 "rw_mbytes_per_sec": 0, 00:21:00.429 "r_mbytes_per_sec": 0, 00:21:00.429 "w_mbytes_per_sec": 0 00:21:00.429 }, 00:21:00.429 "claimed": true, 00:21:00.429 "claim_type": "exclusive_write", 00:21:00.429 "zoned": false, 00:21:00.429 "supported_io_types": { 00:21:00.429 "read": true, 00:21:00.429 "write": true, 00:21:00.429 "unmap": true, 00:21:00.429 "write_zeroes": true, 00:21:00.429 "flush": true, 00:21:00.429 "reset": true, 00:21:00.429 "compare": false, 00:21:00.429 "compare_and_write": false, 00:21:00.429 "abort": true, 00:21:00.429 "nvme_admin": false, 00:21:00.429 "nvme_io": false 00:21:00.429 }, 00:21:00.429 "memory_domains": [ 00:21:00.429 { 00:21:00.429 "dma_device_id": "system", 00:21:00.429 "dma_device_type": 1 00:21:00.429 }, 00:21:00.429 { 00:21:00.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.429 "dma_device_type": 2 00:21:00.429 
} 00:21:00.429 ], 00:21:00.429 "driver_specific": {} 00:21:00.429 }' 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:00.429 07:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:00.686 "name": "BaseBdev3", 00:21:00.686 "aliases": [ 00:21:00.686 "a45719c7-1356-11ef-8e8f-9dd684e56d79" 00:21:00.686 ], 00:21:00.686 "product_name": "Malloc disk", 00:21:00.686 "block_size": 512, 00:21:00.686 "num_blocks": 65536, 00:21:00.686 "uuid": "a45719c7-1356-11ef-8e8f-9dd684e56d79", 00:21:00.686 "assigned_rate_limits": { 00:21:00.686 "rw_ios_per_sec": 0, 00:21:00.686 "rw_mbytes_per_sec": 0, 00:21:00.686 "r_mbytes_per_sec": 0, 00:21:00.686 "w_mbytes_per_sec": 0 00:21:00.686 }, 00:21:00.686 "claimed": true, 00:21:00.686 "claim_type": "exclusive_write", 00:21:00.686 "zoned": false, 00:21:00.686 "supported_io_types": { 00:21:00.686 "read": true, 00:21:00.686 "write": true, 00:21:00.686 "unmap": true, 00:21:00.686 "write_zeroes": true, 00:21:00.686 "flush": true, 00:21:00.686 "reset": true, 00:21:00.686 "compare": false, 00:21:00.686 "compare_and_write": false, 00:21:00.686 "abort": true, 00:21:00.686 "nvme_admin": false, 00:21:00.686 "nvme_io": false 00:21:00.686 }, 00:21:00.686 "memory_domains": [ 00:21:00.686 { 00:21:00.686 "dma_device_id": "system", 00:21:00.686 "dma_device_type": 1 00:21:00.686 }, 00:21:00.686 { 00:21:00.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.686 "dma_device_type": 2 00:21:00.686 } 00:21:00.686 ], 00:21:00.686 "driver_specific": {} 00:21:00.686 }' 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:00.686 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:00.945 [2024-05-16 07:33:54.474067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.945 [2024-05-16 07:33:54.474099] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.945 [2024-05-16 07:33:54.474113] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.945 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.511 07:33:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.511 "name": "Existed_Raid", 00:21:01.511 "uuid": "a2fb3549-1356-11ef-8e8f-9dd684e56d79", 00:21:01.511 "strip_size_kb": 64, 00:21:01.511 "state": "offline", 00:21:01.511 "raid_level": "raid0", 00:21:01.511 "superblock": true, 00:21:01.511 "num_base_bdevs": 3, 00:21:01.511 "num_base_bdevs_discovered": 2, 00:21:01.511 "num_base_bdevs_operational": 2, 00:21:01.511 "base_bdevs_list": [ 00:21:01.511 { 00:21:01.511 "name": null, 00:21:01.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.511 "is_configured": false, 00:21:01.511 "data_offset": 2048, 00:21:01.511 "data_size": 63488 00:21:01.511 }, 00:21:01.511 { 00:21:01.511 "name": "BaseBdev2", 00:21:01.511 "uuid": "a36f2b43-1356-11ef-8e8f-9dd684e56d79", 00:21:01.511 "is_configured": true, 00:21:01.511 "data_offset": 2048, 00:21:01.511 "data_size": 63488 00:21:01.511 }, 00:21:01.511 { 00:21:01.511 "name": "BaseBdev3", 00:21:01.511 "uuid": "a45719c7-1356-11ef-8e8f-9dd684e56d79", 00:21:01.511 "is_configured": true, 00:21:01.511 "data_offset": 2048, 00:21:01.511 "data_size": 63488 00:21:01.511 } 00:21:01.511 ] 00:21:01.511 }' 00:21:01.511 07:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.511 07:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.769 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:01.769 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:01.769 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.769 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:02.025 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:02.025 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:02.025 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:02.281 [2024-05-16 07:33:55.658905] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:02.281 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:02.281 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:02.281 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:02.281 07:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.538 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:02.538 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:02.538 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:03.304 [2024-05-16 07:33:56.415736] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:03.304 [2024-05-16 07:33:56.415776] bdev_raid.c: 367:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x82d27ca00 name Existed_Raid, state offline 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:03.304 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:03.579 BaseBdev2 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:03.579 07:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.838 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:04.098 [ 00:21:04.098 { 00:21:04.098 "name": "BaseBdev2", 00:21:04.098 "aliases": [ 00:21:04.098 "a7718a0e-1356-11ef-8e8f-9dd684e56d79" 00:21:04.098 ], 00:21:04.098 "product_name": "Malloc disk", 00:21:04.098 "block_size": 512, 00:21:04.098 "num_blocks": 65536, 00:21:04.098 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:04.098 "assigned_rate_limits": { 00:21:04.098 "rw_ios_per_sec": 0, 00:21:04.098 "rw_mbytes_per_sec": 0, 00:21:04.098 "r_mbytes_per_sec": 0, 00:21:04.098 "w_mbytes_per_sec": 0 00:21:04.098 }, 00:21:04.098 "claimed": false, 00:21:04.098 "zoned": false, 00:21:04.098 "supported_io_types": { 00:21:04.098 "read": true, 00:21:04.098 "write": true, 00:21:04.098 "unmap": true, 00:21:04.098 "write_zeroes": true, 00:21:04.098 "flush": true, 00:21:04.098 "reset": true, 00:21:04.098 "compare": false, 00:21:04.098 "compare_and_write": false, 00:21:04.098 "abort": true, 00:21:04.098 "nvme_admin": false, 00:21:04.098 "nvme_io": false 00:21:04.098 }, 00:21:04.098 "memory_domains": [ 00:21:04.098 { 00:21:04.098 "dma_device_id": "system", 00:21:04.098 "dma_device_type": 1 
00:21:04.098 }, 00:21:04.098 { 00:21:04.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.098 "dma_device_type": 2 00:21:04.098 } 00:21:04.098 ], 00:21:04.098 "driver_specific": {} 00:21:04.098 } 00:21:04.098 ] 00:21:04.098 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:04.098 07:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:04.098 07:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:04.098 07:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:04.356 BaseBdev3 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:04.356 07:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.614 07:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:04.872 [ 00:21:04.872 { 00:21:04.872 "name": "BaseBdev3", 00:21:04.872 "aliases": [ 00:21:04.872 "a7f38acb-1356-11ef-8e8f-9dd684e56d79" 00:21:04.872 ], 00:21:04.872 "product_name": "Malloc disk", 00:21:04.872 "block_size": 512, 00:21:04.872 "num_blocks": 65536, 00:21:04.872 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:04.872 "assigned_rate_limits": { 00:21:04.872 "rw_ios_per_sec": 0, 00:21:04.872 "rw_mbytes_per_sec": 0, 00:21:04.872 "r_mbytes_per_sec": 0, 00:21:04.872 "w_mbytes_per_sec": 0 00:21:04.872 }, 00:21:04.872 "claimed": false, 00:21:04.872 "zoned": false, 00:21:04.872 "supported_io_types": { 00:21:04.872 "read": true, 00:21:04.872 "write": true, 00:21:04.872 "unmap": true, 00:21:04.872 "write_zeroes": true, 00:21:04.872 "flush": true, 00:21:04.872 "reset": true, 00:21:04.872 "compare": false, 00:21:04.872 "compare_and_write": false, 00:21:04.872 "abort": true, 00:21:04.872 "nvme_admin": false, 00:21:04.872 "nvme_io": false 00:21:04.872 }, 00:21:04.872 "memory_domains": [ 00:21:04.872 { 00:21:04.872 "dma_device_id": "system", 00:21:04.872 "dma_device_type": 1 00:21:04.872 }, 00:21:04.872 { 00:21:04.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.872 "dma_device_type": 2 00:21:04.872 } 00:21:04.872 ], 00:21:04.872 "driver_specific": {} 00:21:04.872 } 00:21:04.872 ] 00:21:04.872 07:33:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:04.872 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:04.872 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:04.872 07:33:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:05.131 [2024-05-16 07:33:58.540599] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.131 [2024-05-16 07:33:58.540673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.131 [2024-05-16 07:33:58.540681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.131 [2024-05-16 07:33:58.541114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.131 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.390 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.390 "name": "Existed_Raid", 00:21:05.390 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:05.390 "strip_size_kb": 64, 00:21:05.390 "state": "configuring", 00:21:05.390 "raid_level": "raid0", 00:21:05.390 "superblock": true, 00:21:05.390 "num_base_bdevs": 3, 00:21:05.390 "num_base_bdevs_discovered": 2, 00:21:05.390 "num_base_bdevs_operational": 3, 00:21:05.390 "base_bdevs_list": [ 00:21:05.390 { 00:21:05.390 "name": "BaseBdev1", 00:21:05.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.390 "is_configured": false, 00:21:05.390 "data_offset": 0, 00:21:05.390 "data_size": 0 00:21:05.390 }, 00:21:05.390 { 00:21:05.390 "name": "BaseBdev2", 00:21:05.390 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:05.390 "is_configured": true, 00:21:05.390 "data_offset": 2048, 00:21:05.390 "data_size": 63488 00:21:05.390 }, 00:21:05.390 { 00:21:05.390 "name": "BaseBdev3", 00:21:05.390 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:05.390 "is_configured": true, 00:21:05.390 "data_offset": 2048, 00:21:05.390 "data_size": 63488 00:21:05.390 } 00:21:05.390 ] 00:21:05.390 }' 00:21:05.390 07:33:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.390 07:33:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.648 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:05.962 [2024-05-16 07:33:59.376613] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.962 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.220 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.220 "name": "Existed_Raid", 00:21:06.220 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:06.220 "strip_size_kb": 64, 00:21:06.220 "state": "configuring", 00:21:06.220 "raid_level": "raid0", 00:21:06.220 "superblock": true, 00:21:06.220 "num_base_bdevs": 3, 00:21:06.220 "num_base_bdevs_discovered": 1, 00:21:06.220 "num_base_bdevs_operational": 3, 00:21:06.220 "base_bdevs_list": [ 00:21:06.220 { 00:21:06.220 "name": "BaseBdev1", 00:21:06.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.220 "is_configured": false, 00:21:06.220 "data_offset": 0, 00:21:06.220 "data_size": 0 00:21:06.220 }, 00:21:06.220 { 00:21:06.220 "name": null, 00:21:06.220 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:06.220 "is_configured": false, 00:21:06.220 "data_offset": 2048, 00:21:06.220 "data_size": 63488 00:21:06.220 }, 00:21:06.220 { 00:21:06.220 "name": "BaseBdev3", 00:21:06.220 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:06.220 "is_configured": true, 00:21:06.220 "data_offset": 2048, 00:21:06.220 "data_size": 63488 00:21:06.220 } 00:21:06.220 ] 00:21:06.220 }' 00:21:06.220 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.220 07:33:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.479 07:33:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.479 07:33:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:06.737 07:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:21:06.737 07:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:06.995 [2024-05-16 07:34:00.524722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.996 BaseBdev1 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:06.996 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.562 07:34:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:07.562 [ 00:21:07.562 { 00:21:07.562 "name": "BaseBdev1", 00:21:07.562 "aliases": [ 00:21:07.562 "a998b12f-1356-11ef-8e8f-9dd684e56d79" 00:21:07.562 ], 00:21:07.562 "product_name": "Malloc disk", 00:21:07.562 "block_size": 512, 00:21:07.562 "num_blocks": 65536, 00:21:07.562 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:07.562 "assigned_rate_limits": { 00:21:07.562 "rw_ios_per_sec": 0, 00:21:07.562 "rw_mbytes_per_sec": 0, 00:21:07.562 "r_mbytes_per_sec": 0, 00:21:07.562 "w_mbytes_per_sec": 0 00:21:07.562 }, 00:21:07.562 "claimed": true, 00:21:07.562 "claim_type": "exclusive_write", 00:21:07.562 "zoned": false, 00:21:07.563 "supported_io_types": { 00:21:07.563 "read": true, 00:21:07.563 "write": true, 00:21:07.563 "unmap": true, 00:21:07.563 "write_zeroes": true, 00:21:07.563 "flush": true, 00:21:07.563 "reset": true, 00:21:07.563 "compare": false, 00:21:07.563 "compare_and_write": false, 00:21:07.563 "abort": true, 00:21:07.563 "nvme_admin": false, 00:21:07.563 "nvme_io": false 00:21:07.563 }, 00:21:07.563 "memory_domains": [ 00:21:07.563 { 00:21:07.563 "dma_device_id": "system", 00:21:07.563 "dma_device_type": 1 00:21:07.563 }, 00:21:07.563 { 00:21:07.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.563 "dma_device_type": 2 00:21:07.563 } 00:21:07.563 ], 00:21:07.563 "driver_specific": {} 00:21:07.563 } 00:21:07.563 ] 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:07.563 07:34:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.563 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.129 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.129 "name": "Existed_Raid", 00:21:08.129 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:08.129 "strip_size_kb": 64, 00:21:08.130 "state": "configuring", 00:21:08.130 "raid_level": "raid0", 00:21:08.130 "superblock": true, 00:21:08.130 "num_base_bdevs": 3, 00:21:08.130 "num_base_bdevs_discovered": 2, 00:21:08.130 "num_base_bdevs_operational": 3, 00:21:08.130 "base_bdevs_list": [ 00:21:08.130 { 00:21:08.130 "name": "BaseBdev1", 00:21:08.130 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:08.130 "is_configured": true, 00:21:08.130 "data_offset": 2048, 00:21:08.130 "data_size": 63488 00:21:08.130 }, 00:21:08.130 { 00:21:08.130 "name": null, 00:21:08.130 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:08.130 "is_configured": false, 00:21:08.130 "data_offset": 2048, 00:21:08.130 "data_size": 63488 00:21:08.130 }, 00:21:08.130 { 00:21:08.130 "name": "BaseBdev3", 00:21:08.130 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:08.130 "is_configured": true, 00:21:08.130 "data_offset": 2048, 00:21:08.130 "data_size": 63488 00:21:08.130 } 00:21:08.130 ] 00:21:08.130 }' 00:21:08.130 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.130 07:34:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.387 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.387 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:08.670 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:08.670 07:34:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:08.670 [2024-05-16 07:34:02.160648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:08.670 
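The verify_raid_bdev_state helper traced here reduces to a single RPC call plus a jq filter over its JSON output. A minimal sketch of the same check, assuming an SPDK target listening on /var/tmp/spdk-raid.sock and using only the calls that appear in the trace (the helper's full comparison logic lives in bdev_raid.sh and is not reproduced):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch every raid bdev known to the target and pick out Existed_Raid.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    # Compare the fields the test asserts on against the expected values.
    [[ $(jq -r '.state' <<<"$info") == configuring ]]
    [[ $(jq -r '.raid_level' <<<"$info") == raid0 ]]
    [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 3 ]]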
07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.670 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.947 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.947 "name": "Existed_Raid", 00:21:08.947 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:08.947 "strip_size_kb": 64, 00:21:08.947 "state": "configuring", 00:21:08.947 "raid_level": "raid0", 00:21:08.947 "superblock": true, 00:21:08.947 "num_base_bdevs": 3, 00:21:08.947 "num_base_bdevs_discovered": 1, 00:21:08.947 "num_base_bdevs_operational": 3, 00:21:08.947 "base_bdevs_list": [ 00:21:08.947 { 00:21:08.947 "name": "BaseBdev1", 00:21:08.947 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:08.947 "is_configured": true, 00:21:08.947 "data_offset": 2048, 00:21:08.947 "data_size": 63488 00:21:08.948 }, 00:21:08.948 { 00:21:08.948 "name": null, 00:21:08.948 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:08.948 "is_configured": false, 00:21:08.948 "data_offset": 2048, 00:21:08.948 "data_size": 63488 00:21:08.948 }, 00:21:08.948 { 00:21:08.948 "name": null, 00:21:08.948 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:08.948 "is_configured": false, 00:21:08.948 "data_offset": 2048, 00:21:08.948 "data_size": 63488 00:21:08.948 } 00:21:08.948 ] 00:21:08.948 }' 00:21:08.948 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.948 07:34:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.205 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.205 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:09.464 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:21:09.464 07:34:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:09.722 [2024-05-16 07:34:03.236702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.722 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.980 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.980 "name": "Existed_Raid", 00:21:09.980 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:09.980 "strip_size_kb": 64, 00:21:09.980 "state": "configuring", 00:21:09.980 "raid_level": "raid0", 00:21:09.980 "superblock": true, 00:21:09.980 "num_base_bdevs": 3, 00:21:09.980 "num_base_bdevs_discovered": 2, 00:21:09.980 "num_base_bdevs_operational": 3, 00:21:09.980 "base_bdevs_list": [ 00:21:09.980 { 00:21:09.980 "name": "BaseBdev1", 00:21:09.980 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:09.980 "is_configured": true, 00:21:09.980 "data_offset": 2048, 00:21:09.980 "data_size": 63488 00:21:09.980 }, 00:21:09.980 { 00:21:09.980 "name": null, 00:21:09.980 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:09.980 "is_configured": false, 00:21:09.980 "data_offset": 2048, 00:21:09.980 "data_size": 63488 00:21:09.980 }, 00:21:09.980 { 00:21:09.980 "name": "BaseBdev3", 00:21:09.980 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:09.980 "is_configured": true, 00:21:09.980 "data_offset": 2048, 00:21:09.980 "data_size": 63488 00:21:09.980 } 00:21:09.980 ] 00:21:09.980 }' 00:21:09.980 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.980 07:34:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.547 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.547 07:34:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:10.547 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:21:10.547 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:10.805 [2024-05-16 07:34:04.244687] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
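Two different paths take a member out of the array in this stretch of the trace: deleting the underlying malloc disk (bdev_malloc_delete), which drops the member as a side effect, and the explicit bdev_raid_remove_base_bdev / bdev_raid_add_base_bdev pair. A minimal sketch of that remove-and-re-add cycle, assuming the same socket and bdev names as the trace; the is_configured probes mirror the jq checks shown above:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Detach BaseBdev3 from the array without destroying the malloc disk behind it.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev3
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect false
    # Attach it again; the raid stays "configuring" until every slot is filled.
    "$rpc" -s "$sock" bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect true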
00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.805 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.133 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.133 "name": "Existed_Raid", 00:21:11.133 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:11.133 "strip_size_kb": 64, 00:21:11.133 "state": "configuring", 00:21:11.133 "raid_level": "raid0", 00:21:11.133 "superblock": true, 00:21:11.133 "num_base_bdevs": 3, 00:21:11.133 "num_base_bdevs_discovered": 1, 00:21:11.133 "num_base_bdevs_operational": 3, 00:21:11.133 "base_bdevs_list": [ 00:21:11.133 { 00:21:11.133 "name": null, 00:21:11.133 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:11.133 "is_configured": false, 00:21:11.133 "data_offset": 2048, 00:21:11.133 "data_size": 63488 00:21:11.133 }, 00:21:11.133 { 00:21:11.133 "name": null, 00:21:11.133 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:11.133 "is_configured": false, 00:21:11.133 "data_offset": 2048, 00:21:11.133 "data_size": 63488 00:21:11.133 }, 00:21:11.133 { 00:21:11.133 "name": "BaseBdev3", 00:21:11.133 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:11.133 "is_configured": true, 00:21:11.133 "data_offset": 2048, 00:21:11.133 "data_size": 63488 00:21:11.133 } 00:21:11.133 ] 00:21:11.133 }' 00:21:11.133 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.133 07:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.392 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.392 07:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:11.651 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:21:11.651 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:11.911 [2024-05-16 07:34:05.309485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.911 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.169 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.169 "name": "Existed_Raid", 00:21:12.169 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:12.169 "strip_size_kb": 64, 00:21:12.169 "state": "configuring", 00:21:12.169 "raid_level": "raid0", 00:21:12.169 "superblock": true, 00:21:12.169 "num_base_bdevs": 3, 00:21:12.169 "num_base_bdevs_discovered": 2, 00:21:12.169 "num_base_bdevs_operational": 3, 00:21:12.169 "base_bdevs_list": [ 00:21:12.169 { 00:21:12.169 "name": null, 00:21:12.169 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:12.169 "is_configured": false, 00:21:12.169 "data_offset": 2048, 00:21:12.169 "data_size": 63488 00:21:12.169 }, 00:21:12.169 { 00:21:12.169 "name": "BaseBdev2", 00:21:12.169 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:12.169 "is_configured": true, 00:21:12.169 "data_offset": 2048, 00:21:12.169 "data_size": 63488 00:21:12.169 }, 00:21:12.169 { 00:21:12.169 "name": "BaseBdev3", 00:21:12.169 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:12.169 "is_configured": true, 00:21:12.169 "data_offset": 2048, 00:21:12.169 "data_size": 63488 00:21:12.169 } 00:21:12.169 ] 00:21:12.169 }' 00:21:12.169 07:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.169 07:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.735 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.735 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:12.993 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:21:12.993 
07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.993 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:13.251 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a998b12f-1356-11ef-8e8f-9dd684e56d79 00:21:13.509 [2024-05-16 07:34:06.969683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:13.509 [2024-05-16 07:34:06.969756] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d27ca00 00:21:13.509 [2024-05-16 07:34:06.969780] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:13.509 [2024-05-16 07:34:06.969826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d2dfe20 00:21:13.509 [2024-05-16 07:34:06.969887] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d27ca00 00:21:13.509 [2024-05-16 07:34:06.969894] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d27ca00 00:21:13.509 [2024-05-16 07:34:06.969925] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.509 NewBaseBdev 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:13.509 07:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:13.767 07:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:14.024 [ 00:21:14.024 { 00:21:14.024 "name": "NewBaseBdev", 00:21:14.024 "aliases": [ 00:21:14.024 "a998b12f-1356-11ef-8e8f-9dd684e56d79" 00:21:14.024 ], 00:21:14.024 "product_name": "Malloc disk", 00:21:14.024 "block_size": 512, 00:21:14.024 "num_blocks": 65536, 00:21:14.024 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:14.024 "assigned_rate_limits": { 00:21:14.025 "rw_ios_per_sec": 0, 00:21:14.025 "rw_mbytes_per_sec": 0, 00:21:14.025 "r_mbytes_per_sec": 0, 00:21:14.025 "w_mbytes_per_sec": 0 00:21:14.025 }, 00:21:14.025 "claimed": true, 00:21:14.025 "claim_type": "exclusive_write", 00:21:14.025 "zoned": false, 00:21:14.025 "supported_io_types": { 00:21:14.025 "read": true, 00:21:14.025 "write": true, 00:21:14.025 "unmap": true, 00:21:14.025 "write_zeroes": true, 00:21:14.025 "flush": true, 00:21:14.025 "reset": true, 00:21:14.025 "compare": false, 00:21:14.025 "compare_and_write": false, 00:21:14.025 "abort": true, 00:21:14.025 "nvme_admin": false, 00:21:14.025 
"nvme_io": false 00:21:14.025 }, 00:21:14.025 "memory_domains": [ 00:21:14.025 { 00:21:14.025 "dma_device_id": "system", 00:21:14.025 "dma_device_type": 1 00:21:14.025 }, 00:21:14.025 { 00:21:14.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.025 "dma_device_type": 2 00:21:14.025 } 00:21:14.025 ], 00:21:14.025 "driver_specific": {} 00:21:14.025 } 00:21:14.025 ] 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.025 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.283 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.283 "name": "Existed_Raid", 00:21:14.283 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:14.283 "strip_size_kb": 64, 00:21:14.283 "state": "online", 00:21:14.283 "raid_level": "raid0", 00:21:14.283 "superblock": true, 00:21:14.283 "num_base_bdevs": 3, 00:21:14.283 "num_base_bdevs_discovered": 3, 00:21:14.283 "num_base_bdevs_operational": 3, 00:21:14.283 "base_bdevs_list": [ 00:21:14.283 { 00:21:14.283 "name": "NewBaseBdev", 00:21:14.283 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:14.283 "is_configured": true, 00:21:14.283 "data_offset": 2048, 00:21:14.283 "data_size": 63488 00:21:14.283 }, 00:21:14.283 { 00:21:14.283 "name": "BaseBdev2", 00:21:14.283 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:14.283 "is_configured": true, 00:21:14.283 "data_offset": 2048, 00:21:14.283 "data_size": 63488 00:21:14.283 }, 00:21:14.283 { 00:21:14.283 "name": "BaseBdev3", 00:21:14.283 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:14.283 "is_configured": true, 00:21:14.283 "data_offset": 2048, 00:21:14.283 "data_size": 63488 00:21:14.283 } 00:21:14.283 ] 00:21:14.283 }' 00:21:14.283 07:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.283 07:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:21:14.849 
07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:14.849 [2024-05-16 07:34:08.353536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.849 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:14.849 "name": "Existed_Raid", 00:21:14.849 "aliases": [ 00:21:14.849 "a869f3f1-1356-11ef-8e8f-9dd684e56d79" 00:21:14.849 ], 00:21:14.849 "product_name": "Raid Volume", 00:21:14.849 "block_size": 512, 00:21:14.849 "num_blocks": 190464, 00:21:14.849 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:14.849 "assigned_rate_limits": { 00:21:14.849 "rw_ios_per_sec": 0, 00:21:14.849 "rw_mbytes_per_sec": 0, 00:21:14.849 "r_mbytes_per_sec": 0, 00:21:14.849 "w_mbytes_per_sec": 0 00:21:14.849 }, 00:21:14.849 "claimed": false, 00:21:14.849 "zoned": false, 00:21:14.849 "supported_io_types": { 00:21:14.849 "read": true, 00:21:14.849 "write": true, 00:21:14.849 "unmap": true, 00:21:14.849 "write_zeroes": true, 00:21:14.849 "flush": true, 00:21:14.849 "reset": true, 00:21:14.849 "compare": false, 00:21:14.849 "compare_and_write": false, 00:21:14.849 "abort": false, 00:21:14.849 "nvme_admin": false, 00:21:14.849 "nvme_io": false 00:21:14.849 }, 00:21:14.849 "memory_domains": [ 00:21:14.849 { 00:21:14.849 "dma_device_id": "system", 00:21:14.849 "dma_device_type": 1 00:21:14.849 }, 00:21:14.849 { 00:21:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.849 "dma_device_type": 2 00:21:14.849 }, 00:21:14.849 { 00:21:14.849 "dma_device_id": "system", 00:21:14.849 "dma_device_type": 1 00:21:14.849 }, 00:21:14.849 { 00:21:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.849 "dma_device_type": 2 00:21:14.849 }, 00:21:14.849 { 00:21:14.849 "dma_device_id": "system", 00:21:14.849 "dma_device_type": 1 00:21:14.849 }, 00:21:14.849 { 00:21:14.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.850 "dma_device_type": 2 00:21:14.850 } 00:21:14.850 ], 00:21:14.850 "driver_specific": { 00:21:14.850 "raid": { 00:21:14.850 "uuid": "a869f3f1-1356-11ef-8e8f-9dd684e56d79", 00:21:14.850 "strip_size_kb": 64, 00:21:14.850 "state": "online", 00:21:14.850 "raid_level": "raid0", 00:21:14.850 "superblock": true, 00:21:14.850 "num_base_bdevs": 3, 00:21:14.850 "num_base_bdevs_discovered": 3, 00:21:14.850 "num_base_bdevs_operational": 3, 00:21:14.850 "base_bdevs_list": [ 00:21:14.850 { 00:21:14.850 "name": "NewBaseBdev", 00:21:14.850 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:14.850 "is_configured": true, 00:21:14.850 "data_offset": 2048, 00:21:14.850 "data_size": 63488 00:21:14.850 }, 00:21:14.850 { 00:21:14.850 "name": "BaseBdev2", 00:21:14.850 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:14.850 "is_configured": true, 00:21:14.850 
"data_offset": 2048, 00:21:14.850 "data_size": 63488 00:21:14.850 }, 00:21:14.850 { 00:21:14.850 "name": "BaseBdev3", 00:21:14.850 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:14.850 "is_configured": true, 00:21:14.850 "data_offset": 2048, 00:21:14.850 "data_size": 63488 00:21:14.850 } 00:21:14.850 ] 00:21:14.850 } 00:21:14.850 } 00:21:14.850 }' 00:21:14.850 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.850 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:21:14.850 BaseBdev2 00:21:14.850 BaseBdev3' 00:21:14.850 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:14.850 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:14.850 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:15.108 "name": "NewBaseBdev", 00:21:15.108 "aliases": [ 00:21:15.108 "a998b12f-1356-11ef-8e8f-9dd684e56d79" 00:21:15.108 ], 00:21:15.108 "product_name": "Malloc disk", 00:21:15.108 "block_size": 512, 00:21:15.108 "num_blocks": 65536, 00:21:15.108 "uuid": "a998b12f-1356-11ef-8e8f-9dd684e56d79", 00:21:15.108 "assigned_rate_limits": { 00:21:15.108 "rw_ios_per_sec": 0, 00:21:15.108 "rw_mbytes_per_sec": 0, 00:21:15.108 "r_mbytes_per_sec": 0, 00:21:15.108 "w_mbytes_per_sec": 0 00:21:15.108 }, 00:21:15.108 "claimed": true, 00:21:15.108 "claim_type": "exclusive_write", 00:21:15.108 "zoned": false, 00:21:15.108 "supported_io_types": { 00:21:15.108 "read": true, 00:21:15.108 "write": true, 00:21:15.108 "unmap": true, 00:21:15.108 "write_zeroes": true, 00:21:15.108 "flush": true, 00:21:15.108 "reset": true, 00:21:15.108 "compare": false, 00:21:15.108 "compare_and_write": false, 00:21:15.108 "abort": true, 00:21:15.108 "nvme_admin": false, 00:21:15.108 "nvme_io": false 00:21:15.108 }, 00:21:15.108 "memory_domains": [ 00:21:15.108 { 00:21:15.108 "dma_device_id": "system", 00:21:15.108 "dma_device_type": 1 00:21:15.108 }, 00:21:15.108 { 00:21:15.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.108 "dma_device_type": 2 00:21:15.108 } 00:21:15.108 ], 00:21:15.108 "driver_specific": {} 00:21:15.108 }' 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.108 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:15.367 "name": "BaseBdev2", 00:21:15.367 "aliases": [ 00:21:15.367 "a7718a0e-1356-11ef-8e8f-9dd684e56d79" 00:21:15.367 ], 00:21:15.367 "product_name": "Malloc disk", 00:21:15.367 "block_size": 512, 00:21:15.367 "num_blocks": 65536, 00:21:15.367 "uuid": "a7718a0e-1356-11ef-8e8f-9dd684e56d79", 00:21:15.367 "assigned_rate_limits": { 00:21:15.367 "rw_ios_per_sec": 0, 00:21:15.367 "rw_mbytes_per_sec": 0, 00:21:15.367 "r_mbytes_per_sec": 0, 00:21:15.367 "w_mbytes_per_sec": 0 00:21:15.367 }, 00:21:15.367 "claimed": true, 00:21:15.367 "claim_type": "exclusive_write", 00:21:15.367 "zoned": false, 00:21:15.367 "supported_io_types": { 00:21:15.367 "read": true, 00:21:15.367 "write": true, 00:21:15.367 "unmap": true, 00:21:15.367 "write_zeroes": true, 00:21:15.367 "flush": true, 00:21:15.367 "reset": true, 00:21:15.367 "compare": false, 00:21:15.367 "compare_and_write": false, 00:21:15.367 "abort": true, 00:21:15.367 "nvme_admin": false, 00:21:15.367 "nvme_io": false 00:21:15.367 }, 00:21:15.367 "memory_domains": [ 00:21:15.367 { 00:21:15.367 "dma_device_id": "system", 00:21:15.367 "dma_device_type": 1 00:21:15.367 }, 00:21:15.367 { 00:21:15.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.367 "dma_device_type": 2 00:21:15.367 } 00:21:15.367 ], 00:21:15.367 "driver_specific": {} 00:21:15.367 }' 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.367 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:15.626 07:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:15.884 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:15.884 "name": "BaseBdev3", 00:21:15.885 "aliases": [ 00:21:15.885 "a7f38acb-1356-11ef-8e8f-9dd684e56d79" 00:21:15.885 ], 00:21:15.885 "product_name": "Malloc disk", 00:21:15.885 "block_size": 512, 00:21:15.885 "num_blocks": 65536, 00:21:15.885 "uuid": "a7f38acb-1356-11ef-8e8f-9dd684e56d79", 00:21:15.885 "assigned_rate_limits": { 00:21:15.885 "rw_ios_per_sec": 0, 00:21:15.885 "rw_mbytes_per_sec": 0, 00:21:15.885 "r_mbytes_per_sec": 0, 00:21:15.885 "w_mbytes_per_sec": 0 00:21:15.885 }, 00:21:15.885 "claimed": true, 00:21:15.885 "claim_type": "exclusive_write", 00:21:15.885 "zoned": false, 00:21:15.885 "supported_io_types": { 00:21:15.885 "read": true, 00:21:15.885 "write": true, 00:21:15.885 "unmap": true, 00:21:15.885 "write_zeroes": true, 00:21:15.885 "flush": true, 00:21:15.885 "reset": true, 00:21:15.885 "compare": false, 00:21:15.885 "compare_and_write": false, 00:21:15.885 "abort": true, 00:21:15.885 "nvme_admin": false, 00:21:15.885 "nvme_io": false 00:21:15.885 }, 00:21:15.885 "memory_domains": [ 00:21:15.885 { 00:21:15.885 "dma_device_id": "system", 00:21:15.885 "dma_device_type": 1 00:21:15.885 }, 00:21:15.885 { 00:21:15.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.885 "dma_device_type": 2 00:21:15.885 } 00:21:15.885 ], 00:21:15.885 "driver_specific": {} 00:21:15.885 }' 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:15.885 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:16.143 [2024-05-16 07:34:09.505538] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:16.143 [2024-05-16 07:34:09.505563] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.143 [2024-05-16 07:34:09.505580] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.143 [2024-05-16 07:34:09.505592] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.143 [2024-05-16 07:34:09.505596] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d27ca00 name Existed_Raid, state offline 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 52770 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 52770 ']' 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 52770 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 52770 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:16.143 killing process with pid 52770 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 52770' 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 52770 00:21:16.143 [2024-05-16 07:34:09.531716] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:16.143 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 52770 00:21:16.143 [2024-05-16 07:34:09.545990] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.402 07:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:21:16.402 ************************************ 00:21:16.402 END TEST raid_state_function_test_sb 00:21:16.402 ************************************ 00:21:16.402 00:21:16.402 real 0m24.593s 00:21:16.402 user 0m45.232s 00:21:16.402 sys 0m3.175s 00:21:16.402 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:16.402 07:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.402 07:34:09 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:21:16.402 07:34:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:16.402 07:34:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.402 07:34:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:16.402 ************************************ 00:21:16.402 START TEST raid_superblock_test 00:21:16.402 ************************************ 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:16.402 07:34:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=53498 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 53498 /var/tmp/spdk-raid.sock 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 53498 ']' 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:16.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:16.402 07:34:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.402 [2024-05-16 07:34:09.762139] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:21:16.402 [2024-05-16 07:34:09.762338] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:16.661 EAL: TSC is not safe to use in SMP mode 00:21:16.661 EAL: TSC is not invariant 00:21:16.661 [2024-05-16 07:34:10.214718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.919 [2024-05-16 07:34:10.297483] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
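The setup loop traced below builds each array member the same way: a malloc disk wrapped in a passthru bdev that carries a fixed UUID, so the on-disk superblock can later be matched against a known identity. A minimal sketch of one iteration, assuming the same target socket and using only the RPCs that appear in the trace that follows:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Back the member with a 32 MiB malloc disk (512-byte blocks), then layer a
    # passthru bdev with a deterministic UUID on top of it.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
    "$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001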
00:21:16.919 [2024-05-16 07:34:10.299600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.919 [2024-05-16 07:34:10.300311] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.919 [2024-05-16 07:34:10.300327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:17.510 07:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:17.769 malloc1 00:21:17.769 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:18.028 [2024-05-16 07:34:11.351363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:18.028 [2024-05-16 07:34:11.351419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.028 [2024-05-16 07:34:11.352021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f6780 00:21:18.028 [2024-05-16 07:34:11.352050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.028 [2024-05-16 07:34:11.352766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.028 [2024-05-16 07:34:11.352796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:18.028 pt1 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.028 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.028 07:34:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:18.287 malloc2 00:21:18.287 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:18.546 [2024-05-16 07:34:11.935379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:18.546 [2024-05-16 07:34:11.935447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.546 [2024-05-16 07:34:11.935475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f6c80 00:21:18.546 [2024-05-16 07:34:11.935483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.546 [2024-05-16 07:34:11.935966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.546 [2024-05-16 07:34:11.935989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:18.546 pt2 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.546 07:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:18.805 malloc3 00:21:18.805 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:19.064 [2024-05-16 07:34:12.427376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:19.064 [2024-05-16 07:34:12.427449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.064 [2024-05-16 07:34:12.427473] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f7180 00:21:19.064 [2024-05-16 07:34:12.427481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.064 [2024-05-16 07:34:12.427990] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.064 [2024-05-16 07:34:12.428014] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:19.064 pt3 00:21:19.064 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:19.064 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:19.064 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:19.324 [2024-05-16 07:34:12.691389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.324 [2024-05-16 07:34:12.691810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:19.324 [2024-05-16 07:34:12.691823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:19.324 [2024-05-16 07:34:12.691882] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c8f7400 00:21:19.324 [2024-05-16 07:34:12.691887] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:19.324 [2024-05-16 07:34:12.691916] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c959e20 00:21:19.324 [2024-05-16 07:34:12.691969] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c8f7400 00:21:19.324 [2024-05-16 07:34:12.691973] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c8f7400 00:21:19.324 [2024-05-16 07:34:12.691992] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.324 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.582 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.582 "name": "raid_bdev1", 00:21:19.582 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:19.582 "strip_size_kb": 64, 00:21:19.582 "state": "online", 00:21:19.582 "raid_level": "raid0", 00:21:19.582 "superblock": true, 00:21:19.582 "num_base_bdevs": 3, 00:21:19.582 "num_base_bdevs_discovered": 3, 00:21:19.582 "num_base_bdevs_operational": 3, 00:21:19.582 "base_bdevs_list": [ 00:21:19.582 { 00:21:19.582 "name": "pt1", 00:21:19.582 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:19.582 "is_configured": true, 00:21:19.582 "data_offset": 2048, 00:21:19.582 "data_size": 63488 00:21:19.582 }, 00:21:19.582 { 00:21:19.582 "name": "pt2", 00:21:19.582 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:19.582 "is_configured": true, 00:21:19.582 
"data_offset": 2048, 00:21:19.582 "data_size": 63488 00:21:19.582 }, 00:21:19.582 { 00:21:19.582 "name": "pt3", 00:21:19.582 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:19.582 "is_configured": true, 00:21:19.582 "data_offset": 2048, 00:21:19.582 "data_size": 63488 00:21:19.582 } 00:21:19.582 ] 00:21:19.582 }' 00:21:19.582 07:34:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.582 07:34:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:20.149 [2024-05-16 07:34:13.679434] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:20.149 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:20.149 "name": "raid_bdev1", 00:21:20.149 "aliases": [ 00:21:20.149 "b0d9312b-1356-11ef-8e8f-9dd684e56d79" 00:21:20.149 ], 00:21:20.149 "product_name": "Raid Volume", 00:21:20.150 "block_size": 512, 00:21:20.150 "num_blocks": 190464, 00:21:20.150 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:20.150 "assigned_rate_limits": { 00:21:20.150 "rw_ios_per_sec": 0, 00:21:20.150 "rw_mbytes_per_sec": 0, 00:21:20.150 "r_mbytes_per_sec": 0, 00:21:20.150 "w_mbytes_per_sec": 0 00:21:20.150 }, 00:21:20.150 "claimed": false, 00:21:20.150 "zoned": false, 00:21:20.150 "supported_io_types": { 00:21:20.150 "read": true, 00:21:20.150 "write": true, 00:21:20.150 "unmap": true, 00:21:20.150 "write_zeroes": true, 00:21:20.150 "flush": true, 00:21:20.150 "reset": true, 00:21:20.150 "compare": false, 00:21:20.150 "compare_and_write": false, 00:21:20.150 "abort": false, 00:21:20.150 "nvme_admin": false, 00:21:20.150 "nvme_io": false 00:21:20.150 }, 00:21:20.150 "memory_domains": [ 00:21:20.150 { 00:21:20.150 "dma_device_id": "system", 00:21:20.150 "dma_device_type": 1 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.150 "dma_device_type": 2 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "dma_device_id": "system", 00:21:20.150 "dma_device_type": 1 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.150 "dma_device_type": 2 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "dma_device_id": "system", 00:21:20.150 "dma_device_type": 1 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.150 "dma_device_type": 2 00:21:20.150 } 00:21:20.150 ], 00:21:20.150 "driver_specific": { 00:21:20.150 "raid": { 00:21:20.150 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:20.150 "strip_size_kb": 64, 00:21:20.150 "state": "online", 00:21:20.150 "raid_level": "raid0", 
00:21:20.150 "superblock": true, 00:21:20.150 "num_base_bdevs": 3, 00:21:20.150 "num_base_bdevs_discovered": 3, 00:21:20.150 "num_base_bdevs_operational": 3, 00:21:20.150 "base_bdevs_list": [ 00:21:20.150 { 00:21:20.150 "name": "pt1", 00:21:20.150 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:20.150 "is_configured": true, 00:21:20.150 "data_offset": 2048, 00:21:20.150 "data_size": 63488 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "name": "pt2", 00:21:20.150 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:20.150 "is_configured": true, 00:21:20.150 "data_offset": 2048, 00:21:20.150 "data_size": 63488 00:21:20.150 }, 00:21:20.150 { 00:21:20.150 "name": "pt3", 00:21:20.150 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:20.150 "is_configured": true, 00:21:20.150 "data_offset": 2048, 00:21:20.150 "data_size": 63488 00:21:20.150 } 00:21:20.150 ] 00:21:20.150 } 00:21:20.150 } 00:21:20.150 }' 00:21:20.150 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:20.409 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:20.409 pt2 00:21:20.409 pt3' 00:21:20.409 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:20.409 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:20.409 07:34:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:20.667 "name": "pt1", 00:21:20.667 "aliases": [ 00:21:20.667 "ab5208db-dacb-3951-9888-bbdad2f98ff1" 00:21:20.667 ], 00:21:20.667 "product_name": "passthru", 00:21:20.667 "block_size": 512, 00:21:20.667 "num_blocks": 65536, 00:21:20.667 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:20.667 "assigned_rate_limits": { 00:21:20.667 "rw_ios_per_sec": 0, 00:21:20.667 "rw_mbytes_per_sec": 0, 00:21:20.667 "r_mbytes_per_sec": 0, 00:21:20.667 "w_mbytes_per_sec": 0 00:21:20.667 }, 00:21:20.667 "claimed": true, 00:21:20.667 "claim_type": "exclusive_write", 00:21:20.667 "zoned": false, 00:21:20.667 "supported_io_types": { 00:21:20.667 "read": true, 00:21:20.667 "write": true, 00:21:20.667 "unmap": true, 00:21:20.667 "write_zeroes": true, 00:21:20.667 "flush": true, 00:21:20.667 "reset": true, 00:21:20.667 "compare": false, 00:21:20.667 "compare_and_write": false, 00:21:20.667 "abort": true, 00:21:20.667 "nvme_admin": false, 00:21:20.667 "nvme_io": false 00:21:20.667 }, 00:21:20.667 "memory_domains": [ 00:21:20.667 { 00:21:20.667 "dma_device_id": "system", 00:21:20.667 "dma_device_type": 1 00:21:20.667 }, 00:21:20.667 { 00:21:20.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.667 "dma_device_type": 2 00:21:20.667 } 00:21:20.667 ], 00:21:20.667 "driver_specific": { 00:21:20.667 "passthru": { 00:21:20.667 "name": "pt1", 00:21:20.667 "base_bdev_name": "malloc1" 00:21:20.667 } 00:21:20.667 } 00:21:20.667 }' 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:20.667 
07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:20.667 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:20.926 "name": "pt2", 00:21:20.926 "aliases": [ 00:21:20.926 "0aee64e5-47b8-7552-a331-5186cc58f4d3" 00:21:20.926 ], 00:21:20.926 "product_name": "passthru", 00:21:20.926 "block_size": 512, 00:21:20.926 "num_blocks": 65536, 00:21:20.926 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:20.926 "assigned_rate_limits": { 00:21:20.926 "rw_ios_per_sec": 0, 00:21:20.926 "rw_mbytes_per_sec": 0, 00:21:20.926 "r_mbytes_per_sec": 0, 00:21:20.926 "w_mbytes_per_sec": 0 00:21:20.926 }, 00:21:20.926 "claimed": true, 00:21:20.926 "claim_type": "exclusive_write", 00:21:20.926 "zoned": false, 00:21:20.926 "supported_io_types": { 00:21:20.926 "read": true, 00:21:20.926 "write": true, 00:21:20.926 "unmap": true, 00:21:20.926 "write_zeroes": true, 00:21:20.926 "flush": true, 00:21:20.926 "reset": true, 00:21:20.926 "compare": false, 00:21:20.926 "compare_and_write": false, 00:21:20.926 "abort": true, 00:21:20.926 "nvme_admin": false, 00:21:20.926 "nvme_io": false 00:21:20.926 }, 00:21:20.926 "memory_domains": [ 00:21:20.926 { 00:21:20.926 "dma_device_id": "system", 00:21:20.926 "dma_device_type": 1 00:21:20.926 }, 00:21:20.926 { 00:21:20.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.926 "dma_device_type": 2 00:21:20.926 } 00:21:20.926 ], 00:21:20.926 "driver_specific": { 00:21:20.926 "passthru": { 00:21:20.926 "name": "pt2", 00:21:20.926 "base_bdev_name": "malloc2" 00:21:20.926 } 00:21:20.926 } 00:21:20.926 }' 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:20.926 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:21.186 "name": "pt3", 00:21:21.186 "aliases": [ 00:21:21.186 "fe6db704-898a-7f5d-b67f-eb1d420b2eb2" 00:21:21.186 ], 00:21:21.186 "product_name": "passthru", 00:21:21.186 "block_size": 512, 00:21:21.186 "num_blocks": 65536, 00:21:21.186 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:21.186 "assigned_rate_limits": { 00:21:21.186 "rw_ios_per_sec": 0, 00:21:21.186 "rw_mbytes_per_sec": 0, 00:21:21.186 "r_mbytes_per_sec": 0, 00:21:21.186 "w_mbytes_per_sec": 0 00:21:21.186 }, 00:21:21.186 "claimed": true, 00:21:21.186 "claim_type": "exclusive_write", 00:21:21.186 "zoned": false, 00:21:21.186 "supported_io_types": { 00:21:21.186 "read": true, 00:21:21.186 "write": true, 00:21:21.186 "unmap": true, 00:21:21.186 "write_zeroes": true, 00:21:21.186 "flush": true, 00:21:21.186 "reset": true, 00:21:21.186 "compare": false, 00:21:21.186 "compare_and_write": false, 00:21:21.186 "abort": true, 00:21:21.186 "nvme_admin": false, 00:21:21.186 "nvme_io": false 00:21:21.186 }, 00:21:21.186 "memory_domains": [ 00:21:21.186 { 00:21:21.186 "dma_device_id": "system", 00:21:21.186 "dma_device_type": 1 00:21:21.186 }, 00:21:21.186 { 00:21:21.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.186 "dma_device_type": 2 00:21:21.186 } 00:21:21.186 ], 00:21:21.186 "driver_specific": { 00:21:21.186 "passthru": { 00:21:21.186 "name": "pt3", 00:21:21.186 "base_bdev_name": "malloc3" 00:21:21.186 } 00:21:21.186 } 00:21:21.186 }' 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:21.186 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:21.445 [2024-05-16 07:34:14.975438] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.445 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b0d9312b-1356-11ef-8e8f-9dd684e56d79 00:21:21.445 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b0d9312b-1356-11ef-8e8f-9dd684e56d79 ']' 00:21:21.445 07:34:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:21.704 [2024-05-16 07:34:15.191411] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.704 [2024-05-16 07:34:15.191431] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.704 [2024-05-16 07:34:15.191446] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.704 [2024-05-16 07:34:15.191458] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.704 [2024-05-16 07:34:15.191462] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8f7400 name raid_bdev1, state offline 00:21:21.704 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.704 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:21.963 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:21.963 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:21.963 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:21.963 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:22.221 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.221 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:22.480 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:22.480 07:34:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:22.739 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:22.739 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:22.997 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:22.997 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:22.997 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- 
# local es=0 00:21:22.997 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:22.997 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:22.998 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:23.256 [2024-05-16 07:34:16.599440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:23.256 [2024-05-16 07:34:16.599907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:23.256 [2024-05-16 07:34:16.599926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:23.256 [2024-05-16 07:34:16.599939] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:23.256 [2024-05-16 07:34:16.599973] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:23.256 [2024-05-16 07:34:16.599984] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:23.256 [2024-05-16 07:34:16.599992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.256 [2024-05-16 07:34:16.599996] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8f7180 name raid_bdev1, state configuring 00:21:23.256 request: 00:21:23.256 { 00:21:23.256 "name": "raid_bdev1", 00:21:23.256 "raid_level": "raid0", 00:21:23.256 "base_bdevs": [ 00:21:23.256 "malloc1", 00:21:23.256 "malloc2", 00:21:23.256 "malloc3" 00:21:23.256 ], 00:21:23.256 "superblock": false, 00:21:23.256 "strip_size_kb": 64, 00:21:23.256 "method": "bdev_raid_create", 00:21:23.256 "req_id": 1 00:21:23.256 } 00:21:23.256 Got JSON-RPC error response 00:21:23.256 response: 00:21:23.256 { 00:21:23.256 "code": -17, 00:21:23.256 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:23.256 } 00:21:23.256 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:23.256 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.256 07:34:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.256 07:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.256 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.256 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:23.514 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:23.514 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:23.514 07:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:23.514 [2024-05-16 07:34:17.031431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:23.514 [2024-05-16 07:34:17.031508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.514 [2024-05-16 07:34:17.031545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f6c80 00:21:23.514 [2024-05-16 07:34:17.031553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.514 [2024-05-16 07:34:17.032127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.514 [2024-05-16 07:34:17.032151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:23.514 [2024-05-16 07:34:17.032175] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:23.514 [2024-05-16 07:34:17.032185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:23.514 pt1 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.514 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.772 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.772 "name": "raid_bdev1", 00:21:23.773 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:23.773 "strip_size_kb": 64, 00:21:23.773 "state": 
"configuring", 00:21:23.773 "raid_level": "raid0", 00:21:23.773 "superblock": true, 00:21:23.773 "num_base_bdevs": 3, 00:21:23.773 "num_base_bdevs_discovered": 1, 00:21:23.773 "num_base_bdevs_operational": 3, 00:21:23.773 "base_bdevs_list": [ 00:21:23.773 { 00:21:23.773 "name": "pt1", 00:21:23.773 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:23.773 "is_configured": true, 00:21:23.773 "data_offset": 2048, 00:21:23.773 "data_size": 63488 00:21:23.773 }, 00:21:23.773 { 00:21:23.773 "name": null, 00:21:23.773 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:23.773 "is_configured": false, 00:21:23.773 "data_offset": 2048, 00:21:23.773 "data_size": 63488 00:21:23.773 }, 00:21:23.773 { 00:21:23.773 "name": null, 00:21:23.773 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:23.773 "is_configured": false, 00:21:23.773 "data_offset": 2048, 00:21:23.773 "data_size": 63488 00:21:23.773 } 00:21:23.773 ] 00:21:23.773 }' 00:21:23.773 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.773 07:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.031 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:24.031 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.289 [2024-05-16 07:34:17.823458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.289 [2024-05-16 07:34:17.823528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.289 [2024-05-16 07:34:17.823558] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f7680 00:21:24.289 [2024-05-16 07:34:17.823566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.289 [2024-05-16 07:34:17.823680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.289 [2024-05-16 07:34:17.823690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.289 [2024-05-16 07:34:17.823711] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.289 [2024-05-16 07:34:17.823719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.289 pt2 00:21:24.289 07:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:24.547 [2024-05-16 07:34:18.047453] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.547 
07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.547 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.806 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.806 "name": "raid_bdev1", 00:21:24.806 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:24.806 "strip_size_kb": 64, 00:21:24.806 "state": "configuring", 00:21:24.806 "raid_level": "raid0", 00:21:24.806 "superblock": true, 00:21:24.806 "num_base_bdevs": 3, 00:21:24.806 "num_base_bdevs_discovered": 1, 00:21:24.806 "num_base_bdevs_operational": 3, 00:21:24.806 "base_bdevs_list": [ 00:21:24.806 { 00:21:24.806 "name": "pt1", 00:21:24.806 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:24.806 "is_configured": true, 00:21:24.806 "data_offset": 2048, 00:21:24.806 "data_size": 63488 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "name": null, 00:21:24.806 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:24.806 "is_configured": false, 00:21:24.806 "data_offset": 2048, 00:21:24.806 "data_size": 63488 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "name": null, 00:21:24.806 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:24.806 "is_configured": false, 00:21:24.806 "data_offset": 2048, 00:21:24.806 "data_size": 63488 00:21:24.806 } 00:21:24.806 ] 00:21:24.806 }' 00:21:24.806 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.806 07:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.372 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:25.372 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:25.372 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:25.372 [2024-05-16 07:34:18.899460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:25.372 [2024-05-16 07:34:18.899524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.372 [2024-05-16 07:34:18.899551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f7680 00:21:25.372 [2024-05-16 07:34:18.899559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.372 [2024-05-16 07:34:18.899654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.372 [2024-05-16 07:34:18.899663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:25.372 [2024-05-16 07:34:18.899685] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:25.372 [2024-05-16 07:34:18.899693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:25.372 pt2 00:21:25.372 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:25.372 07:34:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:25.372 07:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:25.938 [2024-05-16 07:34:19.223463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:25.938 [2024-05-16 07:34:19.223526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.938 [2024-05-16 07:34:19.223551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8f7400 00:21:25.938 [2024-05-16 07:34:19.223559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.938 [2024-05-16 07:34:19.223656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.938 [2024-05-16 07:34:19.223665] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:25.938 [2024-05-16 07:34:19.223685] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:25.938 [2024-05-16 07:34:19.223692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:25.938 [2024-05-16 07:34:19.223744] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c8f6780 00:21:25.938 [2024-05-16 07:34:19.223748] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:25.938 [2024-05-16 07:34:19.223768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c959e20 00:21:25.938 [2024-05-16 07:34:19.223812] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c8f6780 00:21:25.938 [2024-05-16 07:34:19.223815] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c8f6780 00:21:25.938 [2024-05-16 07:34:19.223833] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.938 pt3 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.938 "name": "raid_bdev1", 00:21:25.938 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:25.938 "strip_size_kb": 64, 00:21:25.938 "state": "online", 00:21:25.938 "raid_level": "raid0", 00:21:25.938 "superblock": true, 00:21:25.938 "num_base_bdevs": 3, 00:21:25.938 "num_base_bdevs_discovered": 3, 00:21:25.938 "num_base_bdevs_operational": 3, 00:21:25.938 "base_bdevs_list": [ 00:21:25.938 { 00:21:25.938 "name": "pt1", 00:21:25.938 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 }, 00:21:25.938 { 00:21:25.938 "name": "pt2", 00:21:25.938 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 }, 00:21:25.938 { 00:21:25.938 "name": "pt3", 00:21:25.938 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:25.938 "is_configured": true, 00:21:25.938 "data_offset": 2048, 00:21:25.938 "data_size": 63488 00:21:25.938 } 00:21:25.938 ] 00:21:25.938 }' 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.938 07:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:26.505 07:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:26.763 [2024-05-16 07:34:20.087497] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.763 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:26.763 "name": "raid_bdev1", 00:21:26.763 "aliases": [ 00:21:26.763 "b0d9312b-1356-11ef-8e8f-9dd684e56d79" 00:21:26.763 ], 00:21:26.763 "product_name": "Raid Volume", 00:21:26.763 "block_size": 512, 00:21:26.763 "num_blocks": 190464, 00:21:26.763 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:26.763 "assigned_rate_limits": { 00:21:26.763 "rw_ios_per_sec": 0, 00:21:26.763 "rw_mbytes_per_sec": 0, 00:21:26.763 "r_mbytes_per_sec": 0, 00:21:26.763 "w_mbytes_per_sec": 0 00:21:26.763 }, 00:21:26.763 "claimed": false, 00:21:26.763 "zoned": false, 00:21:26.763 "supported_io_types": { 00:21:26.763 "read": true, 00:21:26.763 "write": true, 00:21:26.763 "unmap": true, 00:21:26.763 "write_zeroes": true, 00:21:26.763 "flush": true, 00:21:26.763 "reset": true, 00:21:26.763 "compare": false, 00:21:26.763 "compare_and_write": false, 00:21:26.763 "abort": false, 00:21:26.763 "nvme_admin": 
false, 00:21:26.763 "nvme_io": false 00:21:26.763 }, 00:21:26.763 "memory_domains": [ 00:21:26.763 { 00:21:26.763 "dma_device_id": "system", 00:21:26.763 "dma_device_type": 1 00:21:26.763 }, 00:21:26.763 { 00:21:26.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.763 "dma_device_type": 2 00:21:26.763 }, 00:21:26.763 { 00:21:26.763 "dma_device_id": "system", 00:21:26.763 "dma_device_type": 1 00:21:26.763 }, 00:21:26.763 { 00:21:26.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.763 "dma_device_type": 2 00:21:26.763 }, 00:21:26.763 { 00:21:26.763 "dma_device_id": "system", 00:21:26.763 "dma_device_type": 1 00:21:26.763 }, 00:21:26.763 { 00:21:26.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.763 "dma_device_type": 2 00:21:26.763 } 00:21:26.763 ], 00:21:26.763 "driver_specific": { 00:21:26.763 "raid": { 00:21:26.763 "uuid": "b0d9312b-1356-11ef-8e8f-9dd684e56d79", 00:21:26.763 "strip_size_kb": 64, 00:21:26.763 "state": "online", 00:21:26.763 "raid_level": "raid0", 00:21:26.763 "superblock": true, 00:21:26.763 "num_base_bdevs": 3, 00:21:26.763 "num_base_bdevs_discovered": 3, 00:21:26.764 "num_base_bdevs_operational": 3, 00:21:26.764 "base_bdevs_list": [ 00:21:26.764 { 00:21:26.764 "name": "pt1", 00:21:26.764 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:26.764 "is_configured": true, 00:21:26.764 "data_offset": 2048, 00:21:26.764 "data_size": 63488 00:21:26.764 }, 00:21:26.764 { 00:21:26.764 "name": "pt2", 00:21:26.764 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:26.764 "is_configured": true, 00:21:26.764 "data_offset": 2048, 00:21:26.764 "data_size": 63488 00:21:26.764 }, 00:21:26.764 { 00:21:26.764 "name": "pt3", 00:21:26.764 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:26.764 "is_configured": true, 00:21:26.764 "data_offset": 2048, 00:21:26.764 "data_size": 63488 00:21:26.764 } 00:21:26.764 ] 00:21:26.764 } 00:21:26.764 } 00:21:26.764 }' 00:21:26.764 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.764 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:21:26.764 pt2 00:21:26.764 pt3' 00:21:26.764 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:26.764 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:26.764 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:27.021 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:27.021 "name": "pt1", 00:21:27.021 "aliases": [ 00:21:27.021 "ab5208db-dacb-3951-9888-bbdad2f98ff1" 00:21:27.021 ], 00:21:27.021 "product_name": "passthru", 00:21:27.021 "block_size": 512, 00:21:27.021 "num_blocks": 65536, 00:21:27.022 "uuid": "ab5208db-dacb-3951-9888-bbdad2f98ff1", 00:21:27.022 "assigned_rate_limits": { 00:21:27.022 "rw_ios_per_sec": 0, 00:21:27.022 "rw_mbytes_per_sec": 0, 00:21:27.022 "r_mbytes_per_sec": 0, 00:21:27.022 "w_mbytes_per_sec": 0 00:21:27.022 }, 00:21:27.022 "claimed": true, 00:21:27.022 "claim_type": "exclusive_write", 00:21:27.022 "zoned": false, 00:21:27.022 "supported_io_types": { 00:21:27.022 "read": true, 00:21:27.022 "write": true, 00:21:27.022 "unmap": true, 00:21:27.022 "write_zeroes": true, 00:21:27.022 "flush": true, 00:21:27.022 "reset": true, 00:21:27.022 "compare": false, 00:21:27.022 
"compare_and_write": false, 00:21:27.022 "abort": true, 00:21:27.022 "nvme_admin": false, 00:21:27.022 "nvme_io": false 00:21:27.022 }, 00:21:27.022 "memory_domains": [ 00:21:27.022 { 00:21:27.022 "dma_device_id": "system", 00:21:27.022 "dma_device_type": 1 00:21:27.022 }, 00:21:27.022 { 00:21:27.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.022 "dma_device_type": 2 00:21:27.022 } 00:21:27.022 ], 00:21:27.022 "driver_specific": { 00:21:27.022 "passthru": { 00:21:27.022 "name": "pt1", 00:21:27.022 "base_bdev_name": "malloc1" 00:21:27.022 } 00:21:27.022 } 00:21:27.022 }' 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:27.022 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:27.280 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:27.280 "name": "pt2", 00:21:27.280 "aliases": [ 00:21:27.280 "0aee64e5-47b8-7552-a331-5186cc58f4d3" 00:21:27.280 ], 00:21:27.280 "product_name": "passthru", 00:21:27.280 "block_size": 512, 00:21:27.280 "num_blocks": 65536, 00:21:27.280 "uuid": "0aee64e5-47b8-7552-a331-5186cc58f4d3", 00:21:27.280 "assigned_rate_limits": { 00:21:27.280 "rw_ios_per_sec": 0, 00:21:27.280 "rw_mbytes_per_sec": 0, 00:21:27.280 "r_mbytes_per_sec": 0, 00:21:27.280 "w_mbytes_per_sec": 0 00:21:27.280 }, 00:21:27.280 "claimed": true, 00:21:27.280 "claim_type": "exclusive_write", 00:21:27.280 "zoned": false, 00:21:27.280 "supported_io_types": { 00:21:27.280 "read": true, 00:21:27.280 "write": true, 00:21:27.280 "unmap": true, 00:21:27.280 "write_zeroes": true, 00:21:27.280 "flush": true, 00:21:27.280 "reset": true, 00:21:27.280 "compare": false, 00:21:27.280 "compare_and_write": false, 00:21:27.280 "abort": true, 00:21:27.280 "nvme_admin": false, 00:21:27.280 "nvme_io": false 00:21:27.280 }, 00:21:27.280 "memory_domains": [ 00:21:27.280 { 00:21:27.281 "dma_device_id": "system", 00:21:27.281 "dma_device_type": 1 00:21:27.281 }, 00:21:27.281 { 00:21:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.281 "dma_device_type": 2 00:21:27.281 } 00:21:27.281 ], 00:21:27.281 "driver_specific": { 
00:21:27.281 "passthru": { 00:21:27.281 "name": "pt2", 00:21:27.281 "base_bdev_name": "malloc2" 00:21:27.281 } 00:21:27.281 } 00:21:27.281 }' 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:27.281 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:27.539 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:27.539 "name": "pt3", 00:21:27.539 "aliases": [ 00:21:27.539 "fe6db704-898a-7f5d-b67f-eb1d420b2eb2" 00:21:27.539 ], 00:21:27.539 "product_name": "passthru", 00:21:27.539 "block_size": 512, 00:21:27.539 "num_blocks": 65536, 00:21:27.539 "uuid": "fe6db704-898a-7f5d-b67f-eb1d420b2eb2", 00:21:27.539 "assigned_rate_limits": { 00:21:27.539 "rw_ios_per_sec": 0, 00:21:27.539 "rw_mbytes_per_sec": 0, 00:21:27.539 "r_mbytes_per_sec": 0, 00:21:27.539 "w_mbytes_per_sec": 0 00:21:27.539 }, 00:21:27.539 "claimed": true, 00:21:27.539 "claim_type": "exclusive_write", 00:21:27.539 "zoned": false, 00:21:27.539 "supported_io_types": { 00:21:27.539 "read": true, 00:21:27.539 "write": true, 00:21:27.539 "unmap": true, 00:21:27.539 "write_zeroes": true, 00:21:27.539 "flush": true, 00:21:27.539 "reset": true, 00:21:27.539 "compare": false, 00:21:27.539 "compare_and_write": false, 00:21:27.539 "abort": true, 00:21:27.539 "nvme_admin": false, 00:21:27.539 "nvme_io": false 00:21:27.539 }, 00:21:27.539 "memory_domains": [ 00:21:27.539 { 00:21:27.539 "dma_device_id": "system", 00:21:27.539 "dma_device_type": 1 00:21:27.539 }, 00:21:27.539 { 00:21:27.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.539 "dma_device_type": 2 00:21:27.539 } 00:21:27.539 ], 00:21:27.539 "driver_specific": { 00:21:27.539 "passthru": { 00:21:27.539 "name": "pt3", 00:21:27.539 "base_bdev_name": "malloc3" 00:21:27.539 } 00:21:27.539 } 00:21:27.539 }' 00:21:27.539 07:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:27.539 
07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.539 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:27.540 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:27.540 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:27.540 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:27.798 [2024-05-16 07:34:21.323516] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b0d9312b-1356-11ef-8e8f-9dd684e56d79 '!=' b0d9312b-1356-11ef-8e8f-9dd684e56d79 ']' 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 53498 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 53498 ']' 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 53498 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:21:27.798 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 53498 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:28.058 killing process with pid 53498 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53498' 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 53498 00:21:28.058 [2024-05-16 07:34:21.358170] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.058 [2024-05-16 07:34:21.358213] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.058 [2024-05-16 07:34:21.358228] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.058 [2024-05-16 07:34:21.358234] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c8f6780 name raid_bdev1, state offline 00:21:28.058 07:34:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 53498 00:21:28.058 [2024-05-16 07:34:21.372655] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:28.058 00:21:28.058 real 0m11.787s 00:21:28.058 user 0m20.962s 00:21:28.058 sys 0m1.870s 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:28.058 07:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.058 ************************************ 00:21:28.058 END TEST raid_superblock_test 00:21:28.058 ************************************ 00:21:28.058 07:34:21 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:21:28.058 07:34:21 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:21:28.058 07:34:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:28.058 07:34:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:28.058 07:34:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.058 ************************************ 00:21:28.058 START TEST raid_state_function_test 00:21:28.058 ************************************ 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:21:28.058 07:34:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=53851 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 53851' 00:21:28.058 Process raid pid: 53851 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 53851 /var/tmp/spdk-raid.sock 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 53851 ']' 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.058 07:34:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.058 [2024-05-16 07:34:21.597765] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:21:28.058 [2024-05-16 07:34:21.598019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:28.625 EAL: TSC is not safe to use in SMP mode 00:21:28.626 EAL: TSC is not invariant 00:21:28.626 [2024-05-16 07:34:22.080366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.626 [2024-05-16 07:34:22.161323] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
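At this point the test has launched a bare bdev_svc application with the bdev_raid log flag enabled and is waiting for it to expose its RPC socket; everything that follows is driven through rpc.py against /var/tmp/spdk-raid.sock. As a condensed sketch (not part of the captured output) of the concat state transitions the trace below walks through, using only RPCs that appear in this log and with scripts/rpc.py invoked from the SPDK checkout (the trace uses its absolute /usr/home/vagrant/spdk_repo/spdk path):

  # Create the raid bdev before any base bdev exists; it stays in the "configuring" state.
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # configuring

  # Create the malloc base bdevs one at a time (65536 blocks of 512 bytes, i.e. 32 MiB each);
  # each one is claimed by the raid bdev as it is examined.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  done
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # online once all three are claimed

  # concat has no redundancy, so deleting any base bdev takes the array offline.
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # offline

The single .state filter is a simplification of the verify_raid_bdev_state helper seen in the trace, which also checks raid_level, strip_size_kb and the base_bdevs_list; the actual test additionally deletes and re-creates Existed_Raid between steps to exercise the configuring path, as the following output shows.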
00:21:28.626 [2024-05-16 07:34:22.163483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.626 [2024-05-16 07:34:22.164265] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.626 [2024-05-16 07:34:22.164280] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.193 07:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.193 07:34:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:21:29.193 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:29.452 [2024-05-16 07:34:22.826471] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.452 [2024-05-16 07:34:22.826526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.452 [2024-05-16 07:34:22.826531] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.452 [2024-05-16 07:34:22.826539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.452 [2024-05-16 07:34:22.826543] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:29.452 [2024-05-16 07:34:22.826549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.452 07:34:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.711 07:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.711 "name": "Existed_Raid", 00:21:29.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.711 "strip_size_kb": 64, 00:21:29.711 "state": "configuring", 00:21:29.711 "raid_level": "concat", 00:21:29.711 "superblock": false, 00:21:29.711 "num_base_bdevs": 3, 00:21:29.711 "num_base_bdevs_discovered": 0, 00:21:29.711 "num_base_bdevs_operational": 3, 00:21:29.711 
"base_bdevs_list": [ 00:21:29.711 { 00:21:29.711 "name": "BaseBdev1", 00:21:29.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.711 "is_configured": false, 00:21:29.711 "data_offset": 0, 00:21:29.711 "data_size": 0 00:21:29.711 }, 00:21:29.711 { 00:21:29.711 "name": "BaseBdev2", 00:21:29.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.711 "is_configured": false, 00:21:29.711 "data_offset": 0, 00:21:29.711 "data_size": 0 00:21:29.711 }, 00:21:29.711 { 00:21:29.711 "name": "BaseBdev3", 00:21:29.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.711 "is_configured": false, 00:21:29.711 "data_offset": 0, 00:21:29.711 "data_size": 0 00:21:29.711 } 00:21:29.711 ] 00:21:29.711 }' 00:21:29.711 07:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.711 07:34:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.970 07:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:30.228 [2024-05-16 07:34:23.778464] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:30.228 [2024-05-16 07:34:23.778494] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a151500 name Existed_Raid, state configuring 00:21:30.485 07:34:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:30.485 [2024-05-16 07:34:23.994460] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.485 [2024-05-16 07:34:23.994512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.485 [2024-05-16 07:34:23.994516] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.485 [2024-05-16 07:34:23.994524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.485 [2024-05-16 07:34:23.994527] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:30.485 [2024-05-16 07:34:23.994533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:30.485 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:30.743 [2024-05-16 07:34:24.223410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.743 BaseBdev1 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:30.743 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:31.002 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:31.568 [ 00:21:31.568 { 00:21:31.568 "name": "BaseBdev1", 00:21:31.568 "aliases": [ 00:21:31.568 "b7b8b260-1356-11ef-8e8f-9dd684e56d79" 00:21:31.568 ], 00:21:31.568 "product_name": "Malloc disk", 00:21:31.568 "block_size": 512, 00:21:31.568 "num_blocks": 65536, 00:21:31.568 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:31.568 "assigned_rate_limits": { 00:21:31.568 "rw_ios_per_sec": 0, 00:21:31.568 "rw_mbytes_per_sec": 0, 00:21:31.568 "r_mbytes_per_sec": 0, 00:21:31.568 "w_mbytes_per_sec": 0 00:21:31.568 }, 00:21:31.568 "claimed": true, 00:21:31.568 "claim_type": "exclusive_write", 00:21:31.568 "zoned": false, 00:21:31.568 "supported_io_types": { 00:21:31.568 "read": true, 00:21:31.568 "write": true, 00:21:31.568 "unmap": true, 00:21:31.568 "write_zeroes": true, 00:21:31.568 "flush": true, 00:21:31.568 "reset": true, 00:21:31.568 "compare": false, 00:21:31.568 "compare_and_write": false, 00:21:31.568 "abort": true, 00:21:31.568 "nvme_admin": false, 00:21:31.568 "nvme_io": false 00:21:31.568 }, 00:21:31.568 "memory_domains": [ 00:21:31.568 { 00:21:31.568 "dma_device_id": "system", 00:21:31.568 "dma_device_type": 1 00:21:31.568 }, 00:21:31.568 { 00:21:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.568 "dma_device_type": 2 00:21:31.568 } 00:21:31.568 ], 00:21:31.568 "driver_specific": {} 00:21:31.568 } 00:21:31.568 ] 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.568 07:34:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.826 07:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.826 "name": "Existed_Raid", 00:21:31.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.826 "strip_size_kb": 64, 00:21:31.826 "state": "configuring", 00:21:31.826 "raid_level": 
"concat", 00:21:31.826 "superblock": false, 00:21:31.826 "num_base_bdevs": 3, 00:21:31.826 "num_base_bdevs_discovered": 1, 00:21:31.826 "num_base_bdevs_operational": 3, 00:21:31.826 "base_bdevs_list": [ 00:21:31.826 { 00:21:31.826 "name": "BaseBdev1", 00:21:31.826 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:31.826 "is_configured": true, 00:21:31.826 "data_offset": 0, 00:21:31.826 "data_size": 65536 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "name": "BaseBdev2", 00:21:31.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.826 "is_configured": false, 00:21:31.826 "data_offset": 0, 00:21:31.826 "data_size": 0 00:21:31.826 }, 00:21:31.826 { 00:21:31.826 "name": "BaseBdev3", 00:21:31.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.826 "is_configured": false, 00:21:31.826 "data_offset": 0, 00:21:31.826 "data_size": 0 00:21:31.826 } 00:21:31.826 ] 00:21:31.826 }' 00:21:31.826 07:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.826 07:34:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.084 07:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:32.343 [2024-05-16 07:34:25.642463] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:32.343 [2024-05-16 07:34:25.642495] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a151500 name Existed_Raid, state configuring 00:21:32.343 07:34:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:32.603 [2024-05-16 07:34:25.982498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.603 [2024-05-16 07:34:25.983220] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:32.603 [2024-05-16 07:34:25.983273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:32.603 [2024-05-16 07:34:25.983282] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:32.604 [2024-05-16 07:34:25.983298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:32.604 07:34:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.604 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.863 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:32.863 "name": "Existed_Raid", 00:21:32.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.863 "strip_size_kb": 64, 00:21:32.863 "state": "configuring", 00:21:32.863 "raid_level": "concat", 00:21:32.863 "superblock": false, 00:21:32.863 "num_base_bdevs": 3, 00:21:32.863 "num_base_bdevs_discovered": 1, 00:21:32.863 "num_base_bdevs_operational": 3, 00:21:32.863 "base_bdevs_list": [ 00:21:32.863 { 00:21:32.863 "name": "BaseBdev1", 00:21:32.863 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:32.863 "is_configured": true, 00:21:32.863 "data_offset": 0, 00:21:32.863 "data_size": 65536 00:21:32.863 }, 00:21:32.863 { 00:21:32.863 "name": "BaseBdev2", 00:21:32.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.863 "is_configured": false, 00:21:32.863 "data_offset": 0, 00:21:32.863 "data_size": 0 00:21:32.863 }, 00:21:32.863 { 00:21:32.863 "name": "BaseBdev3", 00:21:32.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.863 "is_configured": false, 00:21:32.863 "data_offset": 0, 00:21:32.863 "data_size": 0 00:21:32.863 } 00:21:32.863 ] 00:21:32.863 }' 00:21:32.863 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:32.863 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.122 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:33.380 [2024-05-16 07:34:26.714635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.380 BaseBdev2 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:33.380 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:33.638 07:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:33.897 [ 00:21:33.897 { 00:21:33.897 "name": "BaseBdev2", 00:21:33.897 "aliases": [ 00:21:33.897 
"b934f423-1356-11ef-8e8f-9dd684e56d79" 00:21:33.897 ], 00:21:33.897 "product_name": "Malloc disk", 00:21:33.897 "block_size": 512, 00:21:33.897 "num_blocks": 65536, 00:21:33.897 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:33.897 "assigned_rate_limits": { 00:21:33.897 "rw_ios_per_sec": 0, 00:21:33.897 "rw_mbytes_per_sec": 0, 00:21:33.897 "r_mbytes_per_sec": 0, 00:21:33.897 "w_mbytes_per_sec": 0 00:21:33.897 }, 00:21:33.897 "claimed": true, 00:21:33.897 "claim_type": "exclusive_write", 00:21:33.897 "zoned": false, 00:21:33.897 "supported_io_types": { 00:21:33.897 "read": true, 00:21:33.897 "write": true, 00:21:33.897 "unmap": true, 00:21:33.897 "write_zeroes": true, 00:21:33.897 "flush": true, 00:21:33.897 "reset": true, 00:21:33.897 "compare": false, 00:21:33.897 "compare_and_write": false, 00:21:33.897 "abort": true, 00:21:33.897 "nvme_admin": false, 00:21:33.897 "nvme_io": false 00:21:33.897 }, 00:21:33.897 "memory_domains": [ 00:21:33.897 { 00:21:33.897 "dma_device_id": "system", 00:21:33.897 "dma_device_type": 1 00:21:33.897 }, 00:21:33.897 { 00:21:33.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.897 "dma_device_type": 2 00:21:33.897 } 00:21:33.897 ], 00:21:33.897 "driver_specific": {} 00:21:33.897 } 00:21:33.897 ] 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.897 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.155 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.155 "name": "Existed_Raid", 00:21:34.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.155 "strip_size_kb": 64, 00:21:34.155 "state": "configuring", 00:21:34.155 "raid_level": "concat", 00:21:34.155 "superblock": false, 00:21:34.155 "num_base_bdevs": 3, 00:21:34.155 "num_base_bdevs_discovered": 2, 00:21:34.155 "num_base_bdevs_operational": 3, 00:21:34.155 "base_bdevs_list": [ 
00:21:34.155 { 00:21:34.155 "name": "BaseBdev1", 00:21:34.155 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:34.155 "is_configured": true, 00:21:34.155 "data_offset": 0, 00:21:34.155 "data_size": 65536 00:21:34.155 }, 00:21:34.155 { 00:21:34.155 "name": "BaseBdev2", 00:21:34.155 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:34.155 "is_configured": true, 00:21:34.155 "data_offset": 0, 00:21:34.155 "data_size": 65536 00:21:34.155 }, 00:21:34.155 { 00:21:34.155 "name": "BaseBdev3", 00:21:34.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.155 "is_configured": false, 00:21:34.155 "data_offset": 0, 00:21:34.155 "data_size": 0 00:21:34.155 } 00:21:34.155 ] 00:21:34.155 }' 00:21:34.155 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.155 07:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.413 07:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:34.670 [2024-05-16 07:34:28.050654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.670 [2024-05-16 07:34:28.050681] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a151a00 00:21:34.670 [2024-05-16 07:34:28.050685] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:34.670 [2024-05-16 07:34:28.050706] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a1b4ec0 00:21:34.670 [2024-05-16 07:34:28.050790] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a151a00 00:21:34.670 [2024-05-16 07:34:28.050794] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a151a00 00:21:34.670 [2024-05-16 07:34:28.050829] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.670 BaseBdev3 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:34.670 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.947 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:34.947 [ 00:21:34.947 { 00:21:34.947 "name": "BaseBdev3", 00:21:34.947 "aliases": [ 00:21:34.947 "ba00d0d7-1356-11ef-8e8f-9dd684e56d79" 00:21:34.947 ], 00:21:34.947 "product_name": "Malloc disk", 00:21:34.947 "block_size": 512, 00:21:34.947 "num_blocks": 65536, 00:21:34.947 "uuid": "ba00d0d7-1356-11ef-8e8f-9dd684e56d79", 00:21:34.947 "assigned_rate_limits": { 00:21:34.947 "rw_ios_per_sec": 0, 00:21:34.947 "rw_mbytes_per_sec": 0, 00:21:34.947 
"r_mbytes_per_sec": 0, 00:21:34.947 "w_mbytes_per_sec": 0 00:21:34.947 }, 00:21:34.947 "claimed": true, 00:21:34.947 "claim_type": "exclusive_write", 00:21:34.947 "zoned": false, 00:21:34.947 "supported_io_types": { 00:21:34.947 "read": true, 00:21:34.947 "write": true, 00:21:34.947 "unmap": true, 00:21:34.947 "write_zeroes": true, 00:21:34.947 "flush": true, 00:21:34.947 "reset": true, 00:21:34.947 "compare": false, 00:21:34.947 "compare_and_write": false, 00:21:34.947 "abort": true, 00:21:34.947 "nvme_admin": false, 00:21:34.947 "nvme_io": false 00:21:34.947 }, 00:21:34.947 "memory_domains": [ 00:21:34.947 { 00:21:34.947 "dma_device_id": "system", 00:21:34.947 "dma_device_type": 1 00:21:34.947 }, 00:21:34.947 { 00:21:34.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.947 "dma_device_type": 2 00:21:34.947 } 00:21:34.947 ], 00:21:34.947 "driver_specific": {} 00:21:34.947 } 00:21:34.947 ] 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.222 "name": "Existed_Raid", 00:21:35.222 "uuid": "ba00d618-1356-11ef-8e8f-9dd684e56d79", 00:21:35.222 "strip_size_kb": 64, 00:21:35.222 "state": "online", 00:21:35.222 "raid_level": "concat", 00:21:35.222 "superblock": false, 00:21:35.222 "num_base_bdevs": 3, 00:21:35.222 "num_base_bdevs_discovered": 3, 00:21:35.222 "num_base_bdevs_operational": 3, 00:21:35.222 "base_bdevs_list": [ 00:21:35.222 { 00:21:35.222 "name": "BaseBdev1", 00:21:35.222 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:35.222 "is_configured": true, 00:21:35.222 "data_offset": 0, 00:21:35.222 "data_size": 65536 00:21:35.222 }, 00:21:35.222 { 00:21:35.222 "name": "BaseBdev2", 00:21:35.222 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:35.222 "is_configured": 
true, 00:21:35.222 "data_offset": 0, 00:21:35.222 "data_size": 65536 00:21:35.222 }, 00:21:35.222 { 00:21:35.222 "name": "BaseBdev3", 00:21:35.222 "uuid": "ba00d0d7-1356-11ef-8e8f-9dd684e56d79", 00:21:35.222 "is_configured": true, 00:21:35.222 "data_offset": 0, 00:21:35.222 "data_size": 65536 00:21:35.222 } 00:21:35.222 ] 00:21:35.222 }' 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.222 07:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:35.789 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:36.047 [2024-05-16 07:34:29.402598] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.047 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:36.047 "name": "Existed_Raid", 00:21:36.047 "aliases": [ 00:21:36.047 "ba00d618-1356-11ef-8e8f-9dd684e56d79" 00:21:36.047 ], 00:21:36.047 "product_name": "Raid Volume", 00:21:36.047 "block_size": 512, 00:21:36.047 "num_blocks": 196608, 00:21:36.047 "uuid": "ba00d618-1356-11ef-8e8f-9dd684e56d79", 00:21:36.047 "assigned_rate_limits": { 00:21:36.047 "rw_ios_per_sec": 0, 00:21:36.047 "rw_mbytes_per_sec": 0, 00:21:36.047 "r_mbytes_per_sec": 0, 00:21:36.047 "w_mbytes_per_sec": 0 00:21:36.047 }, 00:21:36.047 "claimed": false, 00:21:36.047 "zoned": false, 00:21:36.047 "supported_io_types": { 00:21:36.047 "read": true, 00:21:36.047 "write": true, 00:21:36.047 "unmap": true, 00:21:36.047 "write_zeroes": true, 00:21:36.048 "flush": true, 00:21:36.048 "reset": true, 00:21:36.048 "compare": false, 00:21:36.048 "compare_and_write": false, 00:21:36.048 "abort": false, 00:21:36.048 "nvme_admin": false, 00:21:36.048 "nvme_io": false 00:21:36.048 }, 00:21:36.048 "memory_domains": [ 00:21:36.048 { 00:21:36.048 "dma_device_id": "system", 00:21:36.048 "dma_device_type": 1 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.048 "dma_device_type": 2 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "dma_device_id": "system", 00:21:36.048 "dma_device_type": 1 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.048 "dma_device_type": 2 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "dma_device_id": "system", 00:21:36.048 "dma_device_type": 1 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.048 "dma_device_type": 2 00:21:36.048 } 00:21:36.048 ], 00:21:36.048 "driver_specific": { 00:21:36.048 "raid": { 00:21:36.048 "uuid": "ba00d618-1356-11ef-8e8f-9dd684e56d79", 00:21:36.048 "strip_size_kb": 
64, 00:21:36.048 "state": "online", 00:21:36.048 "raid_level": "concat", 00:21:36.048 "superblock": false, 00:21:36.048 "num_base_bdevs": 3, 00:21:36.048 "num_base_bdevs_discovered": 3, 00:21:36.048 "num_base_bdevs_operational": 3, 00:21:36.048 "base_bdevs_list": [ 00:21:36.048 { 00:21:36.048 "name": "BaseBdev1", 00:21:36.048 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:36.048 "is_configured": true, 00:21:36.048 "data_offset": 0, 00:21:36.048 "data_size": 65536 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "name": "BaseBdev2", 00:21:36.048 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:36.048 "is_configured": true, 00:21:36.048 "data_offset": 0, 00:21:36.048 "data_size": 65536 00:21:36.048 }, 00:21:36.048 { 00:21:36.048 "name": "BaseBdev3", 00:21:36.048 "uuid": "ba00d0d7-1356-11ef-8e8f-9dd684e56d79", 00:21:36.048 "is_configured": true, 00:21:36.048 "data_offset": 0, 00:21:36.048 "data_size": 65536 00:21:36.048 } 00:21:36.048 ] 00:21:36.048 } 00:21:36.048 } 00:21:36.048 }' 00:21:36.048 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.048 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:21:36.048 BaseBdev2 00:21:36.048 BaseBdev3' 00:21:36.048 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:36.048 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:36.048 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:36.307 "name": "BaseBdev1", 00:21:36.307 "aliases": [ 00:21:36.307 "b7b8b260-1356-11ef-8e8f-9dd684e56d79" 00:21:36.307 ], 00:21:36.307 "product_name": "Malloc disk", 00:21:36.307 "block_size": 512, 00:21:36.307 "num_blocks": 65536, 00:21:36.307 "uuid": "b7b8b260-1356-11ef-8e8f-9dd684e56d79", 00:21:36.307 "assigned_rate_limits": { 00:21:36.307 "rw_ios_per_sec": 0, 00:21:36.307 "rw_mbytes_per_sec": 0, 00:21:36.307 "r_mbytes_per_sec": 0, 00:21:36.307 "w_mbytes_per_sec": 0 00:21:36.307 }, 00:21:36.307 "claimed": true, 00:21:36.307 "claim_type": "exclusive_write", 00:21:36.307 "zoned": false, 00:21:36.307 "supported_io_types": { 00:21:36.307 "read": true, 00:21:36.307 "write": true, 00:21:36.307 "unmap": true, 00:21:36.307 "write_zeroes": true, 00:21:36.307 "flush": true, 00:21:36.307 "reset": true, 00:21:36.307 "compare": false, 00:21:36.307 "compare_and_write": false, 00:21:36.307 "abort": true, 00:21:36.307 "nvme_admin": false, 00:21:36.307 "nvme_io": false 00:21:36.307 }, 00:21:36.307 "memory_domains": [ 00:21:36.307 { 00:21:36.307 "dma_device_id": "system", 00:21:36.307 "dma_device_type": 1 00:21:36.307 }, 00:21:36.307 { 00:21:36.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.307 "dma_device_type": 2 00:21:36.307 } 00:21:36.307 ], 00:21:36.307 "driver_specific": {} 00:21:36.307 }' 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:36.307 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:36.565 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:36.566 "name": "BaseBdev2", 00:21:36.566 "aliases": [ 00:21:36.566 "b934f423-1356-11ef-8e8f-9dd684e56d79" 00:21:36.566 ], 00:21:36.566 "product_name": "Malloc disk", 00:21:36.566 "block_size": 512, 00:21:36.566 "num_blocks": 65536, 00:21:36.566 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:36.566 "assigned_rate_limits": { 00:21:36.566 "rw_ios_per_sec": 0, 00:21:36.566 "rw_mbytes_per_sec": 0, 00:21:36.566 "r_mbytes_per_sec": 0, 00:21:36.566 "w_mbytes_per_sec": 0 00:21:36.566 }, 00:21:36.566 "claimed": true, 00:21:36.566 "claim_type": "exclusive_write", 00:21:36.566 "zoned": false, 00:21:36.566 "supported_io_types": { 00:21:36.566 "read": true, 00:21:36.566 "write": true, 00:21:36.566 "unmap": true, 00:21:36.566 "write_zeroes": true, 00:21:36.566 "flush": true, 00:21:36.566 "reset": true, 00:21:36.566 "compare": false, 00:21:36.566 "compare_and_write": false, 00:21:36.566 "abort": true, 00:21:36.566 "nvme_admin": false, 00:21:36.566 "nvme_io": false 00:21:36.566 }, 00:21:36.566 "memory_domains": [ 00:21:36.566 { 00:21:36.566 "dma_device_id": "system", 00:21:36.566 "dma_device_type": 1 00:21:36.566 }, 00:21:36.566 { 00:21:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.566 "dma_device_type": 2 00:21:36.566 } 00:21:36.566 ], 00:21:36.566 "driver_specific": {} 00:21:36.566 }' 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:36.566 07:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:36.566 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:36.824 "name": "BaseBdev3", 00:21:36.824 "aliases": [ 00:21:36.824 "ba00d0d7-1356-11ef-8e8f-9dd684e56d79" 00:21:36.824 ], 00:21:36.824 "product_name": "Malloc disk", 00:21:36.824 "block_size": 512, 00:21:36.824 "num_blocks": 65536, 00:21:36.824 "uuid": "ba00d0d7-1356-11ef-8e8f-9dd684e56d79", 00:21:36.824 "assigned_rate_limits": { 00:21:36.824 "rw_ios_per_sec": 0, 00:21:36.824 "rw_mbytes_per_sec": 0, 00:21:36.824 "r_mbytes_per_sec": 0, 00:21:36.824 "w_mbytes_per_sec": 0 00:21:36.824 }, 00:21:36.824 "claimed": true, 00:21:36.824 "claim_type": "exclusive_write", 00:21:36.824 "zoned": false, 00:21:36.824 "supported_io_types": { 00:21:36.824 "read": true, 00:21:36.824 "write": true, 00:21:36.824 "unmap": true, 00:21:36.824 "write_zeroes": true, 00:21:36.824 "flush": true, 00:21:36.824 "reset": true, 00:21:36.824 "compare": false, 00:21:36.824 "compare_and_write": false, 00:21:36.824 "abort": true, 00:21:36.824 "nvme_admin": false, 00:21:36.824 "nvme_io": false 00:21:36.824 }, 00:21:36.824 "memory_domains": [ 00:21:36.824 { 00:21:36.824 "dma_device_id": "system", 00:21:36.824 "dma_device_type": 1 00:21:36.824 }, 00:21:36.824 { 00:21:36.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.824 "dma_device_type": 2 00:21:36.824 } 00:21:36.824 ], 00:21:36.824 "driver_specific": {} 00:21:36.824 }' 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:36.824 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.825 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:36.825 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:36.825 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:36.825 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:37.083 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:37.083 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:37.342 [2024-05-16 07:34:30.702587] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:37.342 [2024-05-16 07:34:30.702614] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.342 [2024-05-16 07:34:30.702627] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.342 07:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.601 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:37.601 "name": "Existed_Raid", 00:21:37.601 "uuid": "ba00d618-1356-11ef-8e8f-9dd684e56d79", 00:21:37.601 "strip_size_kb": 64, 00:21:37.601 "state": "offline", 00:21:37.601 "raid_level": "concat", 00:21:37.601 "superblock": false, 00:21:37.601 "num_base_bdevs": 3, 00:21:37.601 "num_base_bdevs_discovered": 2, 00:21:37.601 "num_base_bdevs_operational": 2, 00:21:37.601 "base_bdevs_list": [ 00:21:37.601 { 00:21:37.601 "name": null, 00:21:37.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.601 "is_configured": false, 00:21:37.601 "data_offset": 0, 00:21:37.601 "data_size": 65536 00:21:37.601 }, 00:21:37.601 { 00:21:37.601 "name": "BaseBdev2", 00:21:37.601 "uuid": "b934f423-1356-11ef-8e8f-9dd684e56d79", 00:21:37.601 "is_configured": true, 00:21:37.601 "data_offset": 0, 00:21:37.601 "data_size": 65536 00:21:37.601 }, 00:21:37.601 { 00:21:37.601 "name": "BaseBdev3", 00:21:37.601 "uuid": "ba00d0d7-1356-11ef-8e8f-9dd684e56d79", 00:21:37.601 "is_configured": true, 00:21:37.601 "data_offset": 0, 00:21:37.601 "data_size": 65536 
00:21:37.601 } 00:21:37.601 ] 00:21:37.601 }' 00:21:37.601 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:37.601 07:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.859 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:37.859 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:37.859 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.859 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:38.426 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:38.426 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:38.426 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:38.426 [2024-05-16 07:34:31.887440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:38.427 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:38.427 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:38.427 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:21:38.427 07:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.692 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:21:38.692 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:38.692 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:38.965 [2024-05-16 07:34:32.376261] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:38.965 [2024-05-16 07:34:32.376299] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a151a00 name Existed_Raid, state offline 00:21:38.965 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:38.965 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:38.965 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.965 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:21:39.223 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:21:39.223 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:21:39.223 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:21:39.223 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:21:39.223 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:39.223 07:34:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:39.482 BaseBdev2 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:39.482 07:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.740 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.999 [ 00:21:39.999 { 00:21:39.999 "name": "BaseBdev2", 00:21:39.999 "aliases": [ 00:21:39.999 "bce89ee8-1356-11ef-8e8f-9dd684e56d79" 00:21:39.999 ], 00:21:39.999 "product_name": "Malloc disk", 00:21:39.999 "block_size": 512, 00:21:39.999 "num_blocks": 65536, 00:21:39.999 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:39.999 "assigned_rate_limits": { 00:21:39.999 "rw_ios_per_sec": 0, 00:21:39.999 "rw_mbytes_per_sec": 0, 00:21:39.999 "r_mbytes_per_sec": 0, 00:21:39.999 "w_mbytes_per_sec": 0 00:21:39.999 }, 00:21:39.999 "claimed": false, 00:21:39.999 "zoned": false, 00:21:39.999 "supported_io_types": { 00:21:39.999 "read": true, 00:21:39.999 "write": true, 00:21:39.999 "unmap": true, 00:21:39.999 "write_zeroes": true, 00:21:39.999 "flush": true, 00:21:39.999 "reset": true, 00:21:39.999 "compare": false, 00:21:39.999 "compare_and_write": false, 00:21:39.999 "abort": true, 00:21:39.999 "nvme_admin": false, 00:21:39.999 "nvme_io": false 00:21:39.999 }, 00:21:39.999 "memory_domains": [ 00:21:39.999 { 00:21:39.999 "dma_device_id": "system", 00:21:39.999 "dma_device_type": 1 00:21:39.999 }, 00:21:39.999 { 00:21:39.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.999 "dma_device_type": 2 00:21:39.999 } 00:21:39.999 ], 00:21:39.999 "driver_specific": {} 00:21:39.999 } 00:21:39.999 ] 00:21:39.999 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:39.999 07:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:39.999 07:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:39.999 07:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:40.257 BaseBdev3 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:40.257 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.514 07:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:40.514 [ 00:21:40.514 { 00:21:40.514 "name": "BaseBdev3", 00:21:40.514 "aliases": [ 00:21:40.514 "bd52d21f-1356-11ef-8e8f-9dd684e56d79" 00:21:40.514 ], 00:21:40.514 "product_name": "Malloc disk", 00:21:40.514 "block_size": 512, 00:21:40.514 "num_blocks": 65536, 00:21:40.514 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:40.514 "assigned_rate_limits": { 00:21:40.514 "rw_ios_per_sec": 0, 00:21:40.514 "rw_mbytes_per_sec": 0, 00:21:40.514 "r_mbytes_per_sec": 0, 00:21:40.514 "w_mbytes_per_sec": 0 00:21:40.514 }, 00:21:40.514 "claimed": false, 00:21:40.514 "zoned": false, 00:21:40.514 "supported_io_types": { 00:21:40.514 "read": true, 00:21:40.514 "write": true, 00:21:40.514 "unmap": true, 00:21:40.514 "write_zeroes": true, 00:21:40.514 "flush": true, 00:21:40.514 "reset": true, 00:21:40.514 "compare": false, 00:21:40.514 "compare_and_write": false, 00:21:40.514 "abort": true, 00:21:40.514 "nvme_admin": false, 00:21:40.514 "nvme_io": false 00:21:40.514 }, 00:21:40.514 "memory_domains": [ 00:21:40.514 { 00:21:40.514 "dma_device_id": "system", 00:21:40.514 "dma_device_type": 1 00:21:40.514 }, 00:21:40.514 { 00:21:40.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.514 "dma_device_type": 2 00:21:40.514 } 00:21:40.514 ], 00:21:40.514 "driver_specific": {} 00:21:40.514 } 00:21:40.514 ] 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:40.772 [2024-05-16 07:34:34.289160] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:40.772 [2024-05-16 07:34:34.289215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:40.772 [2024-05-16 07:34:34.289224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:40.772 [2024-05-16 07:34:34.289669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:40.772 07:34:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.772 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.338 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.338 "name": "Existed_Raid", 00:21:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.338 "strip_size_kb": 64, 00:21:41.338 "state": "configuring", 00:21:41.338 "raid_level": "concat", 00:21:41.338 "superblock": false, 00:21:41.338 "num_base_bdevs": 3, 00:21:41.338 "num_base_bdevs_discovered": 2, 00:21:41.338 "num_base_bdevs_operational": 3, 00:21:41.338 "base_bdevs_list": [ 00:21:41.338 { 00:21:41.338 "name": "BaseBdev1", 00:21:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.338 "is_configured": false, 00:21:41.338 "data_offset": 0, 00:21:41.338 "data_size": 0 00:21:41.338 }, 00:21:41.338 { 00:21:41.338 "name": "BaseBdev2", 00:21:41.338 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:41.338 "is_configured": true, 00:21:41.338 "data_offset": 0, 00:21:41.338 "data_size": 65536 00:21:41.338 }, 00:21:41.338 { 00:21:41.338 "name": "BaseBdev3", 00:21:41.338 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:41.338 "is_configured": true, 00:21:41.338 "data_offset": 0, 00:21:41.338 "data_size": 65536 00:21:41.338 } 00:21:41.338 ] 00:21:41.338 }' 00:21:41.338 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.338 07:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.338 07:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:41.595 [2024-05-16 07:34:35.085149] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:41.595 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.596 07:34:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.596 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.596 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.596 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.596 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.853 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.853 "name": "Existed_Raid", 00:21:41.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.853 "strip_size_kb": 64, 00:21:41.853 "state": "configuring", 00:21:41.853 "raid_level": "concat", 00:21:41.853 "superblock": false, 00:21:41.853 "num_base_bdevs": 3, 00:21:41.853 "num_base_bdevs_discovered": 1, 00:21:41.853 "num_base_bdevs_operational": 3, 00:21:41.853 "base_bdevs_list": [ 00:21:41.853 { 00:21:41.853 "name": "BaseBdev1", 00:21:41.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.853 "is_configured": false, 00:21:41.853 "data_offset": 0, 00:21:41.853 "data_size": 0 00:21:41.853 }, 00:21:41.853 { 00:21:41.853 "name": null, 00:21:41.853 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:41.853 "is_configured": false, 00:21:41.853 "data_offset": 0, 00:21:41.853 "data_size": 65536 00:21:41.853 }, 00:21:41.853 { 00:21:41.853 "name": "BaseBdev3", 00:21:41.853 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:41.853 "is_configured": true, 00:21:41.853 "data_offset": 0, 00:21:41.853 "data_size": 65536 00:21:41.853 } 00:21:41.853 ] 00:21:41.853 }' 00:21:41.853 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.853 07:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.420 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.420 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.678 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:21:42.678 07:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.678 [2024-05-16 07:34:36.225283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.678 BaseBdev1 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.936 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.195 [ 00:21:43.195 { 00:21:43.195 "name": "BaseBdev1", 00:21:43.195 "aliases": [ 00:21:43.195 "bee02a05-1356-11ef-8e8f-9dd684e56d79" 00:21:43.195 ], 00:21:43.195 "product_name": "Malloc disk", 00:21:43.195 "block_size": 512, 00:21:43.195 "num_blocks": 65536, 00:21:43.195 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:43.195 "assigned_rate_limits": { 00:21:43.195 "rw_ios_per_sec": 0, 00:21:43.195 "rw_mbytes_per_sec": 0, 00:21:43.195 "r_mbytes_per_sec": 0, 00:21:43.195 "w_mbytes_per_sec": 0 00:21:43.195 }, 00:21:43.195 "claimed": true, 00:21:43.195 "claim_type": "exclusive_write", 00:21:43.195 "zoned": false, 00:21:43.195 "supported_io_types": { 00:21:43.195 "read": true, 00:21:43.195 "write": true, 00:21:43.195 "unmap": true, 00:21:43.195 "write_zeroes": true, 00:21:43.195 "flush": true, 00:21:43.195 "reset": true, 00:21:43.195 "compare": false, 00:21:43.195 "compare_and_write": false, 00:21:43.195 "abort": true, 00:21:43.195 "nvme_admin": false, 00:21:43.195 "nvme_io": false 00:21:43.195 }, 00:21:43.195 "memory_domains": [ 00:21:43.195 { 00:21:43.195 "dma_device_id": "system", 00:21:43.195 "dma_device_type": 1 00:21:43.195 }, 00:21:43.195 { 00:21:43.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.195 "dma_device_type": 2 00:21:43.195 } 00:21:43.195 ], 00:21:43.195 "driver_specific": {} 00:21:43.195 } 00:21:43.195 ] 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.195 07:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.454 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.454 "name": "Existed_Raid", 00:21:43.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.454 "strip_size_kb": 64, 00:21:43.454 "state": "configuring", 
00:21:43.454 "raid_level": "concat", 00:21:43.454 "superblock": false, 00:21:43.454 "num_base_bdevs": 3, 00:21:43.454 "num_base_bdevs_discovered": 2, 00:21:43.454 "num_base_bdevs_operational": 3, 00:21:43.454 "base_bdevs_list": [ 00:21:43.454 { 00:21:43.454 "name": "BaseBdev1", 00:21:43.454 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:43.454 "is_configured": true, 00:21:43.454 "data_offset": 0, 00:21:43.454 "data_size": 65536 00:21:43.454 }, 00:21:43.454 { 00:21:43.454 "name": null, 00:21:43.454 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:43.454 "is_configured": false, 00:21:43.454 "data_offset": 0, 00:21:43.454 "data_size": 65536 00:21:43.454 }, 00:21:43.454 { 00:21:43.454 "name": "BaseBdev3", 00:21:43.454 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:43.454 "is_configured": true, 00:21:43.454 "data_offset": 0, 00:21:43.454 "data_size": 65536 00:21:43.454 } 00:21:43.454 ] 00:21:43.454 }' 00:21:43.454 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.454 07:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.020 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:44.020 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.279 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:44.279 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:44.549 [2024-05-16 07:34:37.841192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.549 07:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.811 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.811 "name": "Existed_Raid", 00:21:44.811 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:44.811 "strip_size_kb": 64, 00:21:44.811 "state": "configuring", 00:21:44.811 "raid_level": "concat", 00:21:44.811 "superblock": false, 00:21:44.811 "num_base_bdevs": 3, 00:21:44.811 "num_base_bdevs_discovered": 1, 00:21:44.811 "num_base_bdevs_operational": 3, 00:21:44.811 "base_bdevs_list": [ 00:21:44.811 { 00:21:44.811 "name": "BaseBdev1", 00:21:44.811 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:44.811 "is_configured": true, 00:21:44.811 "data_offset": 0, 00:21:44.811 "data_size": 65536 00:21:44.811 }, 00:21:44.811 { 00:21:44.811 "name": null, 00:21:44.811 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:44.811 "is_configured": false, 00:21:44.811 "data_offset": 0, 00:21:44.811 "data_size": 65536 00:21:44.811 }, 00:21:44.811 { 00:21:44.811 "name": null, 00:21:44.811 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:44.811 "is_configured": false, 00:21:44.811 "data_offset": 0, 00:21:44.811 "data_size": 65536 00:21:44.811 } 00:21:44.811 ] 00:21:44.811 }' 00:21:44.811 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.811 07:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.083 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.083 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:45.376 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:21:45.376 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:45.376 [2024-05-16 07:34:38.913219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.642 07:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.906 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:21:45.906 "name": "Existed_Raid", 00:21:45.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.906 "strip_size_kb": 64, 00:21:45.907 "state": "configuring", 00:21:45.907 "raid_level": "concat", 00:21:45.907 "superblock": false, 00:21:45.907 "num_base_bdevs": 3, 00:21:45.907 "num_base_bdevs_discovered": 2, 00:21:45.907 "num_base_bdevs_operational": 3, 00:21:45.907 "base_bdevs_list": [ 00:21:45.907 { 00:21:45.907 "name": "BaseBdev1", 00:21:45.907 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:45.907 "is_configured": true, 00:21:45.907 "data_offset": 0, 00:21:45.907 "data_size": 65536 00:21:45.907 }, 00:21:45.907 { 00:21:45.907 "name": null, 00:21:45.907 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:45.907 "is_configured": false, 00:21:45.907 "data_offset": 0, 00:21:45.907 "data_size": 65536 00:21:45.907 }, 00:21:45.907 { 00:21:45.907 "name": "BaseBdev3", 00:21:45.907 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:45.907 "is_configured": true, 00:21:45.907 "data_offset": 0, 00:21:45.907 "data_size": 65536 00:21:45.907 } 00:21:45.907 ] 00:21:45.907 }' 00:21:45.907 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.907 07:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.172 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.172 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:46.438 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:21:46.438 07:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:46.702 [2024-05-16 07:34:40.093220] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.702 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.969 07:34:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.969 "name": "Existed_Raid", 00:21:46.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.969 "strip_size_kb": 64, 00:21:46.969 "state": "configuring", 00:21:46.969 "raid_level": "concat", 00:21:46.969 "superblock": false, 00:21:46.969 "num_base_bdevs": 3, 00:21:46.969 "num_base_bdevs_discovered": 1, 00:21:46.969 "num_base_bdevs_operational": 3, 00:21:46.969 "base_bdevs_list": [ 00:21:46.969 { 00:21:46.969 "name": null, 00:21:46.969 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:46.969 "is_configured": false, 00:21:46.969 "data_offset": 0, 00:21:46.969 "data_size": 65536 00:21:46.969 }, 00:21:46.969 { 00:21:46.969 "name": null, 00:21:46.969 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:46.969 "is_configured": false, 00:21:46.969 "data_offset": 0, 00:21:46.969 "data_size": 65536 00:21:46.969 }, 00:21:46.969 { 00:21:46.969 "name": "BaseBdev3", 00:21:46.969 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:46.969 "is_configured": true, 00:21:46.969 "data_offset": 0, 00:21:46.969 "data_size": 65536 00:21:46.969 } 00:21:46.969 ] 00:21:46.969 }' 00:21:46.969 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.969 07:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.239 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.239 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:47.501 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:21:47.501 07:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:47.759 [2024-05-16 07:34:41.201969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.759 07:34:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.016 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.016 "name": "Existed_Raid", 00:21:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.016 "strip_size_kb": 64, 00:21:48.016 "state": "configuring", 00:21:48.016 "raid_level": "concat", 00:21:48.016 "superblock": false, 00:21:48.016 "num_base_bdevs": 3, 00:21:48.016 "num_base_bdevs_discovered": 2, 00:21:48.016 "num_base_bdevs_operational": 3, 00:21:48.016 "base_bdevs_list": [ 00:21:48.016 { 00:21:48.016 "name": null, 00:21:48.016 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:48.016 "is_configured": false, 00:21:48.016 "data_offset": 0, 00:21:48.016 "data_size": 65536 00:21:48.016 }, 00:21:48.016 { 00:21:48.016 "name": "BaseBdev2", 00:21:48.016 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 0, 00:21:48.016 "data_size": 65536 00:21:48.016 }, 00:21:48.016 { 00:21:48.016 "name": "BaseBdev3", 00:21:48.016 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:48.016 "is_configured": true, 00:21:48.016 "data_offset": 0, 00:21:48.016 "data_size": 65536 00:21:48.016 } 00:21:48.016 ] 00:21:48.016 }' 00:21:48.016 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.016 07:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.275 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.275 07:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:48.844 07:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:21:48.844 07:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.844 07:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:49.101 07:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bee02a05-1356-11ef-8e8f-9dd684e56d79 00:21:49.101 [2024-05-16 07:34:42.654082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:49.101 [2024-05-16 07:34:42.654109] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a151a00 00:21:49.101 [2024-05-16 07:34:42.654113] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:49.101 [2024-05-16 07:34:42.654135] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a1b4e20 00:21:49.101 [2024-05-16 07:34:42.654191] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a151a00 00:21:49.102 [2024-05-16 07:34:42.654195] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a151a00 00:21:49.102 [2024-05-16 07:34:42.654224] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.360 NewBaseBdev 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.360 07:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:49.926 [ 00:21:49.926 { 00:21:49.926 "name": "NewBaseBdev", 00:21:49.926 "aliases": [ 00:21:49.926 "bee02a05-1356-11ef-8e8f-9dd684e56d79" 00:21:49.926 ], 00:21:49.926 "product_name": "Malloc disk", 00:21:49.926 "block_size": 512, 00:21:49.926 "num_blocks": 65536, 00:21:49.926 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:49.926 "assigned_rate_limits": { 00:21:49.926 "rw_ios_per_sec": 0, 00:21:49.926 "rw_mbytes_per_sec": 0, 00:21:49.926 "r_mbytes_per_sec": 0, 00:21:49.926 "w_mbytes_per_sec": 0 00:21:49.926 }, 00:21:49.926 "claimed": true, 00:21:49.926 "claim_type": "exclusive_write", 00:21:49.926 "zoned": false, 00:21:49.926 "supported_io_types": { 00:21:49.926 "read": true, 00:21:49.926 "write": true, 00:21:49.926 "unmap": true, 00:21:49.926 "write_zeroes": true, 00:21:49.926 "flush": true, 00:21:49.926 "reset": true, 00:21:49.926 "compare": false, 00:21:49.926 "compare_and_write": false, 00:21:49.926 "abort": true, 00:21:49.926 "nvme_admin": false, 00:21:49.926 "nvme_io": false 00:21:49.926 }, 00:21:49.926 "memory_domains": [ 00:21:49.926 { 00:21:49.926 "dma_device_id": "system", 00:21:49.926 "dma_device_type": 1 00:21:49.926 }, 00:21:49.926 { 00:21:49.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.926 "dma_device_type": 2 00:21:49.926 } 00:21:49.926 ], 00:21:49.926 "driver_specific": {} 00:21:49.926 } 00:21:49.926 ] 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.926 07:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.926 "name": "Existed_Raid", 00:21:49.926 "uuid": "c2b5248c-1356-11ef-8e8f-9dd684e56d79", 00:21:49.926 "strip_size_kb": 64, 00:21:49.926 "state": "online", 00:21:49.926 "raid_level": "concat", 00:21:49.926 "superblock": false, 00:21:49.926 "num_base_bdevs": 3, 00:21:49.926 "num_base_bdevs_discovered": 3, 00:21:49.926 "num_base_bdevs_operational": 3, 00:21:49.926 "base_bdevs_list": [ 00:21:49.926 { 00:21:49.926 "name": "NewBaseBdev", 00:21:49.926 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:49.926 "is_configured": true, 00:21:49.926 "data_offset": 0, 00:21:49.926 "data_size": 65536 00:21:49.926 }, 00:21:49.926 { 00:21:49.926 "name": "BaseBdev2", 00:21:49.926 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:49.926 "is_configured": true, 00:21:49.926 "data_offset": 0, 00:21:49.926 "data_size": 65536 00:21:49.926 }, 00:21:49.926 { 00:21:49.926 "name": "BaseBdev3", 00:21:49.926 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:49.926 "is_configured": true, 00:21:49.926 "data_offset": 0, 00:21:49.926 "data_size": 65536 00:21:49.926 } 00:21:49.926 ] 00:21:49.926 }' 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.926 07:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:50.494 07:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:21:50.494 [2024-05-16 07:34:44.030009] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.494 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:21:50.494 "name": "Existed_Raid", 00:21:50.494 "aliases": [ 00:21:50.494 "c2b5248c-1356-11ef-8e8f-9dd684e56d79" 00:21:50.494 ], 00:21:50.494 "product_name": "Raid Volume", 00:21:50.494 "block_size": 512, 00:21:50.494 "num_blocks": 196608, 00:21:50.494 "uuid": "c2b5248c-1356-11ef-8e8f-9dd684e56d79", 00:21:50.494 "assigned_rate_limits": { 00:21:50.494 "rw_ios_per_sec": 0, 00:21:50.494 "rw_mbytes_per_sec": 0, 00:21:50.494 "r_mbytes_per_sec": 0, 00:21:50.494 "w_mbytes_per_sec": 0 00:21:50.494 }, 00:21:50.494 "claimed": false, 00:21:50.494 "zoned": false, 00:21:50.494 "supported_io_types": { 00:21:50.494 "read": true, 00:21:50.494 "write": true, 
00:21:50.494 "unmap": true, 00:21:50.494 "write_zeroes": true, 00:21:50.494 "flush": true, 00:21:50.494 "reset": true, 00:21:50.494 "compare": false, 00:21:50.494 "compare_and_write": false, 00:21:50.494 "abort": false, 00:21:50.494 "nvme_admin": false, 00:21:50.494 "nvme_io": false 00:21:50.494 }, 00:21:50.494 "memory_domains": [ 00:21:50.494 { 00:21:50.494 "dma_device_id": "system", 00:21:50.494 "dma_device_type": 1 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.494 "dma_device_type": 2 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "dma_device_id": "system", 00:21:50.494 "dma_device_type": 1 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.494 "dma_device_type": 2 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "dma_device_id": "system", 00:21:50.494 "dma_device_type": 1 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.494 "dma_device_type": 2 00:21:50.494 } 00:21:50.494 ], 00:21:50.494 "driver_specific": { 00:21:50.494 "raid": { 00:21:50.494 "uuid": "c2b5248c-1356-11ef-8e8f-9dd684e56d79", 00:21:50.494 "strip_size_kb": 64, 00:21:50.494 "state": "online", 00:21:50.494 "raid_level": "concat", 00:21:50.494 "superblock": false, 00:21:50.494 "num_base_bdevs": 3, 00:21:50.494 "num_base_bdevs_discovered": 3, 00:21:50.494 "num_base_bdevs_operational": 3, 00:21:50.494 "base_bdevs_list": [ 00:21:50.494 { 00:21:50.494 "name": "NewBaseBdev", 00:21:50.494 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:50.494 "is_configured": true, 00:21:50.494 "data_offset": 0, 00:21:50.494 "data_size": 65536 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "name": "BaseBdev2", 00:21:50.494 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:50.494 "is_configured": true, 00:21:50.494 "data_offset": 0, 00:21:50.494 "data_size": 65536 00:21:50.494 }, 00:21:50.494 { 00:21:50.494 "name": "BaseBdev3", 00:21:50.494 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:50.494 "is_configured": true, 00:21:50.494 "data_offset": 0, 00:21:50.494 "data_size": 65536 00:21:50.494 } 00:21:50.494 ] 00:21:50.494 } 00:21:50.494 } 00:21:50.494 }' 00:21:50.494 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.753 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:21:50.753 BaseBdev2 00:21:50.753 BaseBdev3' 00:21:50.753 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:50.753 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:50.753 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:51.011 "name": "NewBaseBdev", 00:21:51.011 "aliases": [ 00:21:51.011 "bee02a05-1356-11ef-8e8f-9dd684e56d79" 00:21:51.011 ], 00:21:51.011 "product_name": "Malloc disk", 00:21:51.011 "block_size": 512, 00:21:51.011 "num_blocks": 65536, 00:21:51.011 "uuid": "bee02a05-1356-11ef-8e8f-9dd684e56d79", 00:21:51.011 "assigned_rate_limits": { 00:21:51.011 "rw_ios_per_sec": 0, 00:21:51.011 "rw_mbytes_per_sec": 0, 00:21:51.011 "r_mbytes_per_sec": 0, 00:21:51.011 "w_mbytes_per_sec": 0 00:21:51.011 }, 00:21:51.011 "claimed": true, 
00:21:51.011 "claim_type": "exclusive_write", 00:21:51.011 "zoned": false, 00:21:51.011 "supported_io_types": { 00:21:51.011 "read": true, 00:21:51.011 "write": true, 00:21:51.011 "unmap": true, 00:21:51.011 "write_zeroes": true, 00:21:51.011 "flush": true, 00:21:51.011 "reset": true, 00:21:51.011 "compare": false, 00:21:51.011 "compare_and_write": false, 00:21:51.011 "abort": true, 00:21:51.011 "nvme_admin": false, 00:21:51.011 "nvme_io": false 00:21:51.011 }, 00:21:51.011 "memory_domains": [ 00:21:51.011 { 00:21:51.011 "dma_device_id": "system", 00:21:51.011 "dma_device_type": 1 00:21:51.011 }, 00:21:51.011 { 00:21:51.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.011 "dma_device_type": 2 00:21:51.011 } 00:21:51.011 ], 00:21:51.011 "driver_specific": {} 00:21:51.011 }' 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:51.011 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:51.271 "name": "BaseBdev2", 00:21:51.271 "aliases": [ 00:21:51.271 "bce89ee8-1356-11ef-8e8f-9dd684e56d79" 00:21:51.271 ], 00:21:51.271 "product_name": "Malloc disk", 00:21:51.271 "block_size": 512, 00:21:51.271 "num_blocks": 65536, 00:21:51.271 "uuid": "bce89ee8-1356-11ef-8e8f-9dd684e56d79", 00:21:51.271 "assigned_rate_limits": { 00:21:51.271 "rw_ios_per_sec": 0, 00:21:51.271 "rw_mbytes_per_sec": 0, 00:21:51.271 "r_mbytes_per_sec": 0, 00:21:51.271 "w_mbytes_per_sec": 0 00:21:51.271 }, 00:21:51.271 "claimed": true, 00:21:51.271 "claim_type": "exclusive_write", 00:21:51.271 "zoned": false, 00:21:51.271 "supported_io_types": { 00:21:51.271 "read": true, 00:21:51.271 "write": true, 00:21:51.271 "unmap": true, 00:21:51.271 "write_zeroes": true, 00:21:51.271 "flush": true, 00:21:51.271 "reset": true, 00:21:51.271 "compare": false, 00:21:51.271 "compare_and_write": false, 00:21:51.271 "abort": true, 00:21:51.271 "nvme_admin": false, 00:21:51.271 "nvme_io": false 00:21:51.271 }, 00:21:51.271 "memory_domains": 
[ 00:21:51.271 { 00:21:51.271 "dma_device_id": "system", 00:21:51.271 "dma_device_type": 1 00:21:51.271 }, 00:21:51.271 { 00:21:51.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.271 "dma_device_type": 2 00:21:51.271 } 00:21:51.271 ], 00:21:51.271 "driver_specific": {} 00:21:51.271 }' 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:51.271 07:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:21:51.871 "name": "BaseBdev3", 00:21:51.871 "aliases": [ 00:21:51.871 "bd52d21f-1356-11ef-8e8f-9dd684e56d79" 00:21:51.871 ], 00:21:51.871 "product_name": "Malloc disk", 00:21:51.871 "block_size": 512, 00:21:51.871 "num_blocks": 65536, 00:21:51.871 "uuid": "bd52d21f-1356-11ef-8e8f-9dd684e56d79", 00:21:51.871 "assigned_rate_limits": { 00:21:51.871 "rw_ios_per_sec": 0, 00:21:51.871 "rw_mbytes_per_sec": 0, 00:21:51.871 "r_mbytes_per_sec": 0, 00:21:51.871 "w_mbytes_per_sec": 0 00:21:51.871 }, 00:21:51.871 "claimed": true, 00:21:51.871 "claim_type": "exclusive_write", 00:21:51.871 "zoned": false, 00:21:51.871 "supported_io_types": { 00:21:51.871 "read": true, 00:21:51.871 "write": true, 00:21:51.871 "unmap": true, 00:21:51.871 "write_zeroes": true, 00:21:51.871 "flush": true, 00:21:51.871 "reset": true, 00:21:51.871 "compare": false, 00:21:51.871 "compare_and_write": false, 00:21:51.871 "abort": true, 00:21:51.871 "nvme_admin": false, 00:21:51.871 "nvme_io": false 00:21:51.871 }, 00:21:51.871 "memory_domains": [ 00:21:51.871 { 00:21:51.871 "dma_device_id": "system", 00:21:51.871 "dma_device_type": 1 00:21:51.871 }, 00:21:51.871 { 00:21:51.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.871 "dma_device_type": 2 00:21:51.871 } 00:21:51.871 ], 00:21:51.871 "driver_specific": {} 00:21:51.871 }' 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 
00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:51.871 [2024-05-16 07:34:45.377974] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:51.871 [2024-05-16 07:34:45.378005] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.871 [2024-05-16 07:34:45.378033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.871 [2024-05-16 07:34:45.378046] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.871 [2024-05-16 07:34:45.378050] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a151a00 name Existed_Raid, state offline 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 53851 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 53851 ']' 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 53851 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 53851 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:21:51.871 killing process with pid 53851 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53851' 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 53851 00:21:51.871 [2024-05-16 07:34:45.407362] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:51.871 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 53851 00:21:51.871 [2024-05-16 07:34:45.422369] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.138 07:34:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:21:52.138 00:21:52.138 real 0m24.015s 00:21:52.138 user 0m43.882s 00:21:52.138 sys 0m3.374s 00:21:52.138 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:52.138 ************************************ 00:21:52.138 END TEST raid_state_function_test 00:21:52.138 ************************************ 00:21:52.138 07:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.138 07:34:45 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:21:52.138 07:34:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:52.138 07:34:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:52.138 07:34:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.138 ************************************ 00:21:52.138 START TEST raid_state_function_test_sb 00:21:52.138 ************************************ 00:21:52.138 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:21:52.138 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:21:52.138 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 
00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:21:52.139 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=54576 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 54576' 00:21:52.140 Process raid pid: 54576 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 54576 /var/tmp/spdk-raid.sock 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 54576 ']' 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:52.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:52.140 07:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.140 [2024-05-16 07:34:45.648073] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:21:52.140 [2024-05-16 07:34:45.648275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:21:52.707 EAL: TSC is not safe to use in SMP mode 00:21:52.707 EAL: TSC is not invariant 00:21:52.707 [2024-05-16 07:34:46.113034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.707 [2024-05-16 07:34:46.211456] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
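Before any of the raid RPCs can be issued, the harness starts a bare bdev_svc application pointed at a private RPC socket, records its pid (raid_pid=54576 in this run) and calls waitforlisten, which is provided by common/autotest_common.sh and blocks until that UNIX socket accepts connections; that is why the EAL start-up notices appear at this point in the log, before the first bdev_raid_create. A hedged sketch of that bring-up pattern with the paths printed above (the explicit backgrounding with & is an assumption, the real script may wire it slightly differently):

# start the RPC target used by this test and wait for its UNIX domain socket
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!                                          # 54576 in this run
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # polls until the target serves RPCs
# from here on every command in the test is issued as:
#   rpc.py -s /var/tmp/spdk-raid.sock <method> ...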
00:21:52.707 [2024-05-16 07:34:46.213666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.707 [2024-05-16 07:34:46.214495] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.707 [2024-05-16 07:34:46.214513] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.270 07:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:53.270 07:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:53.270 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:53.528 [2024-05-16 07:34:46.957589] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.528 [2024-05-16 07:34:46.957647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.528 [2024-05-16 07:34:46.957652] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.528 [2024-05-16 07:34:46.957661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.528 [2024-05-16 07:34:46.957664] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.528 [2024-05-16 07:34:46.957680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.528 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.786 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.786 "name": "Existed_Raid", 00:21:53.786 "uuid": "c545cc93-1356-11ef-8e8f-9dd684e56d79", 00:21:53.786 "strip_size_kb": 64, 00:21:53.786 "state": "configuring", 00:21:53.786 "raid_level": "concat", 00:21:53.786 "superblock": true, 00:21:53.786 "num_base_bdevs": 3, 00:21:53.786 "num_base_bdevs_discovered": 0, 00:21:53.786 
"num_base_bdevs_operational": 3, 00:21:53.786 "base_bdevs_list": [ 00:21:53.786 { 00:21:53.786 "name": "BaseBdev1", 00:21:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.786 "is_configured": false, 00:21:53.786 "data_offset": 0, 00:21:53.786 "data_size": 0 00:21:53.786 }, 00:21:53.786 { 00:21:53.786 "name": "BaseBdev2", 00:21:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.786 "is_configured": false, 00:21:53.786 "data_offset": 0, 00:21:53.786 "data_size": 0 00:21:53.786 }, 00:21:53.786 { 00:21:53.786 "name": "BaseBdev3", 00:21:53.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.786 "is_configured": false, 00:21:53.786 "data_offset": 0, 00:21:53.786 "data_size": 0 00:21:53.786 } 00:21:53.786 ] 00:21:53.786 }' 00:21:53.786 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.786 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.351 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:54.351 [2024-05-16 07:34:47.881562] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.351 [2024-05-16 07:34:47.881594] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a53b500 name Existed_Raid, state configuring 00:21:54.351 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:54.915 [2024-05-16 07:34:48.165575] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:54.915 [2024-05-16 07:34:48.165634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:54.915 [2024-05-16 07:34:48.165647] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.915 [2024-05-16 07:34:48.165656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.915 [2024-05-16 07:34:48.165660] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.916 [2024-05-16 07:34:48.165668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:54.916 [2024-05-16 07:34:48.426544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.916 BaseBdev1 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:54.916 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:54.916 07:34:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:55.173 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:55.430 [ 00:21:55.430 { 00:21:55.430 "name": "BaseBdev1", 00:21:55.430 "aliases": [ 00:21:55.430 "c625cbf3-1356-11ef-8e8f-9dd684e56d79" 00:21:55.430 ], 00:21:55.430 "product_name": "Malloc disk", 00:21:55.430 "block_size": 512, 00:21:55.430 "num_blocks": 65536, 00:21:55.430 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:21:55.430 "assigned_rate_limits": { 00:21:55.430 "rw_ios_per_sec": 0, 00:21:55.430 "rw_mbytes_per_sec": 0, 00:21:55.430 "r_mbytes_per_sec": 0, 00:21:55.430 "w_mbytes_per_sec": 0 00:21:55.430 }, 00:21:55.430 "claimed": true, 00:21:55.430 "claim_type": "exclusive_write", 00:21:55.430 "zoned": false, 00:21:55.430 "supported_io_types": { 00:21:55.430 "read": true, 00:21:55.430 "write": true, 00:21:55.430 "unmap": true, 00:21:55.430 "write_zeroes": true, 00:21:55.430 "flush": true, 00:21:55.430 "reset": true, 00:21:55.430 "compare": false, 00:21:55.430 "compare_and_write": false, 00:21:55.430 "abort": true, 00:21:55.430 "nvme_admin": false, 00:21:55.430 "nvme_io": false 00:21:55.430 }, 00:21:55.430 "memory_domains": [ 00:21:55.430 { 00:21:55.430 "dma_device_id": "system", 00:21:55.430 "dma_device_type": 1 00:21:55.430 }, 00:21:55.430 { 00:21:55.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.430 "dma_device_type": 2 00:21:55.430 } 00:21:55.430 ], 00:21:55.430 "driver_specific": {} 00:21:55.430 } 00:21:55.430 ] 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.688 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.945 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.945 "name": "Existed_Raid", 00:21:55.945 "uuid": 
"c5fe1f88-1356-11ef-8e8f-9dd684e56d79", 00:21:55.945 "strip_size_kb": 64, 00:21:55.945 "state": "configuring", 00:21:55.945 "raid_level": "concat", 00:21:55.945 "superblock": true, 00:21:55.945 "num_base_bdevs": 3, 00:21:55.945 "num_base_bdevs_discovered": 1, 00:21:55.945 "num_base_bdevs_operational": 3, 00:21:55.945 "base_bdevs_list": [ 00:21:55.945 { 00:21:55.945 "name": "BaseBdev1", 00:21:55.945 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:21:55.945 "is_configured": true, 00:21:55.945 "data_offset": 2048, 00:21:55.945 "data_size": 63488 00:21:55.945 }, 00:21:55.945 { 00:21:55.945 "name": "BaseBdev2", 00:21:55.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.945 "is_configured": false, 00:21:55.945 "data_offset": 0, 00:21:55.945 "data_size": 0 00:21:55.945 }, 00:21:55.945 { 00:21:55.945 "name": "BaseBdev3", 00:21:55.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.945 "is_configured": false, 00:21:55.945 "data_offset": 0, 00:21:55.945 "data_size": 0 00:21:55.945 } 00:21:55.945 ] 00:21:55.945 }' 00:21:55.945 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.945 07:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.202 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:56.459 [2024-05-16 07:34:49.953571] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:56.459 [2024-05-16 07:34:49.953605] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a53b500 name Existed_Raid, state configuring 00:21:56.459 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:57.024 [2024-05-16 07:34:50.289598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.024 [2024-05-16 07:34:50.290331] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:57.024 [2024-05-16 07:34:50.290377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:57.024 [2024-05-16 07:34:50.290382] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:57.024 [2024-05-16 07:34:50.290390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.024 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.025 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.283 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.283 "name": "Existed_Raid", 00:21:57.283 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:21:57.283 "strip_size_kb": 64, 00:21:57.283 "state": "configuring", 00:21:57.283 "raid_level": "concat", 00:21:57.283 "superblock": true, 00:21:57.283 "num_base_bdevs": 3, 00:21:57.283 "num_base_bdevs_discovered": 1, 00:21:57.283 "num_base_bdevs_operational": 3, 00:21:57.283 "base_bdevs_list": [ 00:21:57.283 { 00:21:57.283 "name": "BaseBdev1", 00:21:57.283 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:21:57.283 "is_configured": true, 00:21:57.283 "data_offset": 2048, 00:21:57.283 "data_size": 63488 00:21:57.283 }, 00:21:57.283 { 00:21:57.283 "name": "BaseBdev2", 00:21:57.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.283 "is_configured": false, 00:21:57.283 "data_offset": 0, 00:21:57.283 "data_size": 0 00:21:57.283 }, 00:21:57.283 { 00:21:57.283 "name": "BaseBdev3", 00:21:57.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.283 "is_configured": false, 00:21:57.283 "data_offset": 0, 00:21:57.283 "data_size": 0 00:21:57.283 } 00:21:57.283 ] 00:21:57.283 }' 00:21:57.283 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.283 07:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.540 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:57.797 [2024-05-16 07:34:51.153699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.797 BaseBdev2 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:57.797 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:58.055 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:58.353 [ 00:21:58.353 { 00:21:58.353 "name": "BaseBdev2", 00:21:58.353 "aliases": [ 00:21:58.353 "c7c60ed2-1356-11ef-8e8f-9dd684e56d79" 00:21:58.353 ], 00:21:58.353 "product_name": "Malloc disk", 00:21:58.353 "block_size": 512, 00:21:58.353 "num_blocks": 65536, 00:21:58.353 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:21:58.353 "assigned_rate_limits": { 00:21:58.353 "rw_ios_per_sec": 0, 00:21:58.353 "rw_mbytes_per_sec": 0, 00:21:58.353 "r_mbytes_per_sec": 0, 00:21:58.353 "w_mbytes_per_sec": 0 00:21:58.353 }, 00:21:58.353 "claimed": true, 00:21:58.353 "claim_type": "exclusive_write", 00:21:58.353 "zoned": false, 00:21:58.353 "supported_io_types": { 00:21:58.353 "read": true, 00:21:58.353 "write": true, 00:21:58.353 "unmap": true, 00:21:58.353 "write_zeroes": true, 00:21:58.353 "flush": true, 00:21:58.353 "reset": true, 00:21:58.353 "compare": false, 00:21:58.353 "compare_and_write": false, 00:21:58.353 "abort": true, 00:21:58.353 "nvme_admin": false, 00:21:58.353 "nvme_io": false 00:21:58.353 }, 00:21:58.353 "memory_domains": [ 00:21:58.353 { 00:21:58.353 "dma_device_id": "system", 00:21:58.353 "dma_device_type": 1 00:21:58.353 }, 00:21:58.353 { 00:21:58.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.353 "dma_device_type": 2 00:21:58.353 } 00:21:58.353 ], 00:21:58.353 "driver_specific": {} 00:21:58.353 } 00:21:58.353 ] 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.353 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.632 07:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.632 "name": "Existed_Raid", 00:21:58.632 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:21:58.632 "strip_size_kb": 64, 
00:21:58.632 "state": "configuring", 00:21:58.632 "raid_level": "concat", 00:21:58.632 "superblock": true, 00:21:58.632 "num_base_bdevs": 3, 00:21:58.632 "num_base_bdevs_discovered": 2, 00:21:58.632 "num_base_bdevs_operational": 3, 00:21:58.632 "base_bdevs_list": [ 00:21:58.632 { 00:21:58.632 "name": "BaseBdev1", 00:21:58.632 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:21:58.632 "is_configured": true, 00:21:58.632 "data_offset": 2048, 00:21:58.632 "data_size": 63488 00:21:58.632 }, 00:21:58.632 { 00:21:58.632 "name": "BaseBdev2", 00:21:58.632 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:21:58.632 "is_configured": true, 00:21:58.632 "data_offset": 2048, 00:21:58.632 "data_size": 63488 00:21:58.632 }, 00:21:58.632 { 00:21:58.632 "name": "BaseBdev3", 00:21:58.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.632 "is_configured": false, 00:21:58.632 "data_offset": 0, 00:21:58.632 "data_size": 0 00:21:58.632 } 00:21:58.632 ] 00:21:58.632 }' 00:21:58.632 07:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.632 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.889 07:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:59.147 [2024-05-16 07:34:52.573681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.147 [2024-05-16 07:34:52.573738] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a53ba00 00:21:59.147 [2024-05-16 07:34:52.573744] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:59.147 [2024-05-16 07:34:52.573762] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a59eec0 00:21:59.147 [2024-05-16 07:34:52.573799] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a53ba00 00:21:59.147 [2024-05-16 07:34:52.573803] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a53ba00 00:21:59.147 [2024-05-16 07:34:52.573820] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.147 BaseBdev3 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:59.147 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.404 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:59.662 [ 00:21:59.662 { 00:21:59.662 "name": "BaseBdev3", 00:21:59.662 "aliases": [ 00:21:59.662 "c89ebbc3-1356-11ef-8e8f-9dd684e56d79" 00:21:59.662 ], 
00:21:59.662 "product_name": "Malloc disk", 00:21:59.662 "block_size": 512, 00:21:59.662 "num_blocks": 65536, 00:21:59.662 "uuid": "c89ebbc3-1356-11ef-8e8f-9dd684e56d79", 00:21:59.662 "assigned_rate_limits": { 00:21:59.662 "rw_ios_per_sec": 0, 00:21:59.662 "rw_mbytes_per_sec": 0, 00:21:59.662 "r_mbytes_per_sec": 0, 00:21:59.662 "w_mbytes_per_sec": 0 00:21:59.662 }, 00:21:59.662 "claimed": true, 00:21:59.662 "claim_type": "exclusive_write", 00:21:59.662 "zoned": false, 00:21:59.662 "supported_io_types": { 00:21:59.662 "read": true, 00:21:59.662 "write": true, 00:21:59.662 "unmap": true, 00:21:59.662 "write_zeroes": true, 00:21:59.662 "flush": true, 00:21:59.662 "reset": true, 00:21:59.662 "compare": false, 00:21:59.662 "compare_and_write": false, 00:21:59.662 "abort": true, 00:21:59.662 "nvme_admin": false, 00:21:59.662 "nvme_io": false 00:21:59.662 }, 00:21:59.662 "memory_domains": [ 00:21:59.662 { 00:21:59.662 "dma_device_id": "system", 00:21:59.662 "dma_device_type": 1 00:21:59.662 }, 00:21:59.662 { 00:21:59.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.662 "dma_device_type": 2 00:21:59.662 } 00:21:59.662 ], 00:21:59.662 "driver_specific": {} 00:21:59.662 } 00:21:59.662 ] 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.662 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.920 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.920 "name": "Existed_Raid", 00:21:59.920 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:21:59.920 "strip_size_kb": 64, 00:21:59.920 "state": "online", 00:21:59.920 "raid_level": "concat", 00:21:59.920 "superblock": true, 00:21:59.920 "num_base_bdevs": 3, 00:21:59.920 "num_base_bdevs_discovered": 3, 00:21:59.920 "num_base_bdevs_operational": 3, 00:21:59.920 "base_bdevs_list": [ 00:21:59.920 { 
00:21:59.920 "name": "BaseBdev1", 00:21:59.920 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:21:59.920 "is_configured": true, 00:21:59.920 "data_offset": 2048, 00:21:59.920 "data_size": 63488 00:21:59.920 }, 00:21:59.920 { 00:21:59.920 "name": "BaseBdev2", 00:21:59.920 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:21:59.920 "is_configured": true, 00:21:59.920 "data_offset": 2048, 00:21:59.920 "data_size": 63488 00:21:59.920 }, 00:21:59.920 { 00:21:59.920 "name": "BaseBdev3", 00:21:59.920 "uuid": "c89ebbc3-1356-11ef-8e8f-9dd684e56d79", 00:21:59.920 "is_configured": true, 00:21:59.920 "data_offset": 2048, 00:21:59.920 "data_size": 63488 00:21:59.920 } 00:21:59.920 ] 00:21:59.920 }' 00:21:59.920 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.920 07:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:00.177 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:00.433 [2024-05-16 07:34:53.853658] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:00.433 "name": "Existed_Raid", 00:22:00.433 "aliases": [ 00:22:00.433 "c74238f0-1356-11ef-8e8f-9dd684e56d79" 00:22:00.433 ], 00:22:00.433 "product_name": "Raid Volume", 00:22:00.433 "block_size": 512, 00:22:00.433 "num_blocks": 190464, 00:22:00.433 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:22:00.433 "assigned_rate_limits": { 00:22:00.433 "rw_ios_per_sec": 0, 00:22:00.433 "rw_mbytes_per_sec": 0, 00:22:00.433 "r_mbytes_per_sec": 0, 00:22:00.433 "w_mbytes_per_sec": 0 00:22:00.433 }, 00:22:00.433 "claimed": false, 00:22:00.433 "zoned": false, 00:22:00.433 "supported_io_types": { 00:22:00.433 "read": true, 00:22:00.433 "write": true, 00:22:00.433 "unmap": true, 00:22:00.433 "write_zeroes": true, 00:22:00.433 "flush": true, 00:22:00.433 "reset": true, 00:22:00.433 "compare": false, 00:22:00.433 "compare_and_write": false, 00:22:00.433 "abort": false, 00:22:00.433 "nvme_admin": false, 00:22:00.433 "nvme_io": false 00:22:00.433 }, 00:22:00.433 "memory_domains": [ 00:22:00.433 { 00:22:00.433 "dma_device_id": "system", 00:22:00.433 "dma_device_type": 1 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.433 "dma_device_type": 2 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "dma_device_id": "system", 00:22:00.433 "dma_device_type": 1 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.433 "dma_device_type": 2 00:22:00.433 
}, 00:22:00.433 { 00:22:00.433 "dma_device_id": "system", 00:22:00.433 "dma_device_type": 1 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.433 "dma_device_type": 2 00:22:00.433 } 00:22:00.433 ], 00:22:00.433 "driver_specific": { 00:22:00.433 "raid": { 00:22:00.433 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:22:00.433 "strip_size_kb": 64, 00:22:00.433 "state": "online", 00:22:00.433 "raid_level": "concat", 00:22:00.433 "superblock": true, 00:22:00.433 "num_base_bdevs": 3, 00:22:00.433 "num_base_bdevs_discovered": 3, 00:22:00.433 "num_base_bdevs_operational": 3, 00:22:00.433 "base_bdevs_list": [ 00:22:00.433 { 00:22:00.433 "name": "BaseBdev1", 00:22:00.433 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:22:00.433 "is_configured": true, 00:22:00.433 "data_offset": 2048, 00:22:00.433 "data_size": 63488 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "name": "BaseBdev2", 00:22:00.433 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:22:00.433 "is_configured": true, 00:22:00.433 "data_offset": 2048, 00:22:00.433 "data_size": 63488 00:22:00.433 }, 00:22:00.433 { 00:22:00.433 "name": "BaseBdev3", 00:22:00.433 "uuid": "c89ebbc3-1356-11ef-8e8f-9dd684e56d79", 00:22:00.433 "is_configured": true, 00:22:00.433 "data_offset": 2048, 00:22:00.433 "data_size": 63488 00:22:00.433 } 00:22:00.433 ] 00:22:00.433 } 00:22:00.433 } 00:22:00.433 }' 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:00.433 BaseBdev2 00:22:00.433 BaseBdev3' 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:00.433 07:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:00.689 "name": "BaseBdev1", 00:22:00.689 "aliases": [ 00:22:00.689 "c625cbf3-1356-11ef-8e8f-9dd684e56d79" 00:22:00.689 ], 00:22:00.689 "product_name": "Malloc disk", 00:22:00.689 "block_size": 512, 00:22:00.689 "num_blocks": 65536, 00:22:00.689 "uuid": "c625cbf3-1356-11ef-8e8f-9dd684e56d79", 00:22:00.689 "assigned_rate_limits": { 00:22:00.689 "rw_ios_per_sec": 0, 00:22:00.689 "rw_mbytes_per_sec": 0, 00:22:00.689 "r_mbytes_per_sec": 0, 00:22:00.689 "w_mbytes_per_sec": 0 00:22:00.689 }, 00:22:00.689 "claimed": true, 00:22:00.689 "claim_type": "exclusive_write", 00:22:00.689 "zoned": false, 00:22:00.689 "supported_io_types": { 00:22:00.689 "read": true, 00:22:00.689 "write": true, 00:22:00.689 "unmap": true, 00:22:00.689 "write_zeroes": true, 00:22:00.689 "flush": true, 00:22:00.689 "reset": true, 00:22:00.689 "compare": false, 00:22:00.689 "compare_and_write": false, 00:22:00.689 "abort": true, 00:22:00.689 "nvme_admin": false, 00:22:00.689 "nvme_io": false 00:22:00.689 }, 00:22:00.689 "memory_domains": [ 00:22:00.689 { 00:22:00.689 "dma_device_id": "system", 00:22:00.689 "dma_device_type": 1 00:22:00.689 }, 00:22:00.689 { 00:22:00.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.689 "dma_device_type": 2 00:22:00.689 } 00:22:00.689 ], 00:22:00.689 
"driver_specific": {} 00:22:00.689 }' 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:00.689 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:00.946 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:00.946 "name": "BaseBdev2", 00:22:00.946 "aliases": [ 00:22:00.946 "c7c60ed2-1356-11ef-8e8f-9dd684e56d79" 00:22:00.946 ], 00:22:00.946 "product_name": "Malloc disk", 00:22:00.946 "block_size": 512, 00:22:00.946 "num_blocks": 65536, 00:22:00.946 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:22:00.946 "assigned_rate_limits": { 00:22:00.946 "rw_ios_per_sec": 0, 00:22:00.946 "rw_mbytes_per_sec": 0, 00:22:00.946 "r_mbytes_per_sec": 0, 00:22:00.946 "w_mbytes_per_sec": 0 00:22:00.946 }, 00:22:00.946 "claimed": true, 00:22:00.946 "claim_type": "exclusive_write", 00:22:00.946 "zoned": false, 00:22:00.946 "supported_io_types": { 00:22:00.946 "read": true, 00:22:00.946 "write": true, 00:22:00.946 "unmap": true, 00:22:00.946 "write_zeroes": true, 00:22:00.946 "flush": true, 00:22:00.946 "reset": true, 00:22:00.946 "compare": false, 00:22:00.946 "compare_and_write": false, 00:22:00.946 "abort": true, 00:22:00.946 "nvme_admin": false, 00:22:00.946 "nvme_io": false 00:22:00.946 }, 00:22:00.946 "memory_domains": [ 00:22:00.946 { 00:22:00.946 "dma_device_id": "system", 00:22:00.946 "dma_device_type": 1 00:22:00.946 }, 00:22:00.946 { 00:22:00.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.947 "dma_device_type": 2 00:22:00.947 } 00:22:00.947 ], 00:22:00.947 "driver_specific": {} 00:22:00.947 }' 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_size 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:00.947 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:01.204 "name": "BaseBdev3", 00:22:01.204 "aliases": [ 00:22:01.204 "c89ebbc3-1356-11ef-8e8f-9dd684e56d79" 00:22:01.204 ], 00:22:01.204 "product_name": "Malloc disk", 00:22:01.204 "block_size": 512, 00:22:01.204 "num_blocks": 65536, 00:22:01.204 "uuid": "c89ebbc3-1356-11ef-8e8f-9dd684e56d79", 00:22:01.204 "assigned_rate_limits": { 00:22:01.204 "rw_ios_per_sec": 0, 00:22:01.204 "rw_mbytes_per_sec": 0, 00:22:01.204 "r_mbytes_per_sec": 0, 00:22:01.204 "w_mbytes_per_sec": 0 00:22:01.204 }, 00:22:01.204 "claimed": true, 00:22:01.204 "claim_type": "exclusive_write", 00:22:01.204 "zoned": false, 00:22:01.204 "supported_io_types": { 00:22:01.204 "read": true, 00:22:01.204 "write": true, 00:22:01.204 "unmap": true, 00:22:01.204 "write_zeroes": true, 00:22:01.204 "flush": true, 00:22:01.204 "reset": true, 00:22:01.204 "compare": false, 00:22:01.204 "compare_and_write": false, 00:22:01.204 "abort": true, 00:22:01.204 "nvme_admin": false, 00:22:01.204 "nvme_io": false 00:22:01.204 }, 00:22:01.204 "memory_domains": [ 00:22:01.204 { 00:22:01.204 "dma_device_id": "system", 00:22:01.204 "dma_device_type": 1 00:22:01.204 }, 00:22:01.204 { 00:22:01.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.204 "dma_device_type": 2 00:22:01.204 } 00:22:01.204 ], 00:22:01.204 "driver_specific": {} 00:22:01.204 }' 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:01.204 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:01.460 
07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:01.460 07:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:01.460 [2024-05-16 07:34:54.997620] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.461 [2024-05-16 07:34:54.997651] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:01.461 [2024-05-16 07:34:54.997667] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.717 "name": "Existed_Raid", 00:22:01.717 "uuid": "c74238f0-1356-11ef-8e8f-9dd684e56d79", 00:22:01.717 "strip_size_kb": 64, 00:22:01.717 "state": "offline", 00:22:01.717 "raid_level": "concat", 00:22:01.717 "superblock": true, 00:22:01.717 "num_base_bdevs": 3, 00:22:01.717 "num_base_bdevs_discovered": 2, 00:22:01.717 "num_base_bdevs_operational": 2, 00:22:01.717 "base_bdevs_list": [ 00:22:01.717 { 00:22:01.717 "name": null, 
00:22:01.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.717 "is_configured": false, 00:22:01.717 "data_offset": 2048, 00:22:01.717 "data_size": 63488 00:22:01.717 }, 00:22:01.717 { 00:22:01.717 "name": "BaseBdev2", 00:22:01.717 "uuid": "c7c60ed2-1356-11ef-8e8f-9dd684e56d79", 00:22:01.717 "is_configured": true, 00:22:01.717 "data_offset": 2048, 00:22:01.717 "data_size": 63488 00:22:01.717 }, 00:22:01.717 { 00:22:01.717 "name": "BaseBdev3", 00:22:01.717 "uuid": "c89ebbc3-1356-11ef-8e8f-9dd684e56d79", 00:22:01.717 "is_configured": true, 00:22:01.717 "data_offset": 2048, 00:22:01.717 "data_size": 63488 00:22:01.717 } 00:22:01.717 ] 00:22:01.717 }' 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.717 07:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.281 07:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:02.539 [2024-05-16 07:34:55.994995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:02.539 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:02.539 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:02.539 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.539 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:02.796 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:02.796 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:02.796 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:03.054 [2024-05-16 07:34:56.548083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:03.054 [2024-05-16 07:34:56.548118] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a53ba00 name Existed_Raid, state offline 00:22:03.054 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:03.054 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:03.054 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:03.054 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:03.311 07:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:03.571 BaseBdev2 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:03.571 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.138 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:04.138 [ 00:22:04.138 { 00:22:04.138 "name": "BaseBdev2", 00:22:04.138 "aliases": [ 00:22:04.138 "cb4c1044-1356-11ef-8e8f-9dd684e56d79" 00:22:04.138 ], 00:22:04.138 "product_name": "Malloc disk", 00:22:04.138 "block_size": 512, 00:22:04.138 "num_blocks": 65536, 00:22:04.138 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:04.138 "assigned_rate_limits": { 00:22:04.138 "rw_ios_per_sec": 0, 00:22:04.138 "rw_mbytes_per_sec": 0, 00:22:04.138 "r_mbytes_per_sec": 0, 00:22:04.138 "w_mbytes_per_sec": 0 00:22:04.138 }, 00:22:04.138 "claimed": false, 00:22:04.138 "zoned": false, 00:22:04.138 "supported_io_types": { 00:22:04.138 "read": true, 00:22:04.138 "write": true, 00:22:04.138 "unmap": true, 00:22:04.138 "write_zeroes": true, 00:22:04.138 "flush": true, 00:22:04.138 "reset": true, 00:22:04.138 "compare": false, 00:22:04.138 "compare_and_write": false, 00:22:04.138 "abort": true, 00:22:04.138 "nvme_admin": false, 00:22:04.138 "nvme_io": false 00:22:04.138 }, 00:22:04.138 "memory_domains": [ 00:22:04.138 { 00:22:04.138 "dma_device_id": "system", 00:22:04.138 "dma_device_type": 1 00:22:04.138 }, 00:22:04.138 { 00:22:04.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.138 "dma_device_type": 2 00:22:04.138 } 00:22:04.138 ], 00:22:04.138 "driver_specific": {} 00:22:04.138 } 00:22:04.138 ] 00:22:04.138 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:04.138 07:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:22:04.138 07:34:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:04.138 07:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:04.397 BaseBdev3 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:04.397 07:34:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.963 07:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:04.963 [ 00:22:04.963 { 00:22:04.963 "name": "BaseBdev3", 00:22:04.963 "aliases": [ 00:22:04.963 "cbccd855-1356-11ef-8e8f-9dd684e56d79" 00:22:04.963 ], 00:22:04.963 "product_name": "Malloc disk", 00:22:04.963 "block_size": 512, 00:22:04.963 "num_blocks": 65536, 00:22:04.963 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:04.963 "assigned_rate_limits": { 00:22:04.963 "rw_ios_per_sec": 0, 00:22:04.963 "rw_mbytes_per_sec": 0, 00:22:04.963 "r_mbytes_per_sec": 0, 00:22:04.963 "w_mbytes_per_sec": 0 00:22:04.963 }, 00:22:04.963 "claimed": false, 00:22:04.963 "zoned": false, 00:22:04.963 "supported_io_types": { 00:22:04.963 "read": true, 00:22:04.963 "write": true, 00:22:04.963 "unmap": true, 00:22:04.963 "write_zeroes": true, 00:22:04.963 "flush": true, 00:22:04.963 "reset": true, 00:22:04.963 "compare": false, 00:22:04.963 "compare_and_write": false, 00:22:04.963 "abort": true, 00:22:04.963 "nvme_admin": false, 00:22:04.963 "nvme_io": false 00:22:04.963 }, 00:22:04.963 "memory_domains": [ 00:22:04.963 { 00:22:04.963 "dma_device_id": "system", 00:22:04.963 "dma_device_type": 1 00:22:04.963 }, 00:22:04.963 { 00:22:04.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.963 "dma_device_type": 2 00:22:04.963 } 00:22:04.963 ], 00:22:04.963 "driver_specific": {} 00:22:04.963 } 00:22:04.963 ] 00:22:04.963 07:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:04.963 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:22:04.963 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:04.963 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:05.222 [2024-05-16 07:34:58.728960] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.222 [2024-05-16 07:34:58.729014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.222 [2024-05-16 
07:34:58.729024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:05.222 [2024-05-16 07:34:58.729479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.222 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.481 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.481 "name": "Existed_Raid", 00:22:05.481 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:05.481 "strip_size_kb": 64, 00:22:05.481 "state": "configuring", 00:22:05.481 "raid_level": "concat", 00:22:05.481 "superblock": true, 00:22:05.481 "num_base_bdevs": 3, 00:22:05.481 "num_base_bdevs_discovered": 2, 00:22:05.481 "num_base_bdevs_operational": 3, 00:22:05.481 "base_bdevs_list": [ 00:22:05.481 { 00:22:05.481 "name": "BaseBdev1", 00:22:05.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.481 "is_configured": false, 00:22:05.481 "data_offset": 0, 00:22:05.481 "data_size": 0 00:22:05.481 }, 00:22:05.481 { 00:22:05.481 "name": "BaseBdev2", 00:22:05.481 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:05.481 "is_configured": true, 00:22:05.481 "data_offset": 2048, 00:22:05.481 "data_size": 63488 00:22:05.481 }, 00:22:05.481 { 00:22:05.481 "name": "BaseBdev3", 00:22:05.481 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:05.481 "is_configured": true, 00:22:05.481 "data_offset": 2048, 00:22:05.481 "data_size": 63488 00:22:05.481 } 00:22:05.481 ] 00:22:05.481 }' 00:22:05.481 07:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.481 07:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:06.049 [2024-05-16 07:34:59.544965] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.049 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.308 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.308 "name": "Existed_Raid", 00:22:06.308 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:06.308 "strip_size_kb": 64, 00:22:06.308 "state": "configuring", 00:22:06.308 "raid_level": "concat", 00:22:06.308 "superblock": true, 00:22:06.308 "num_base_bdevs": 3, 00:22:06.308 "num_base_bdevs_discovered": 1, 00:22:06.308 "num_base_bdevs_operational": 3, 00:22:06.308 "base_bdevs_list": [ 00:22:06.308 { 00:22:06.308 "name": "BaseBdev1", 00:22:06.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.308 "is_configured": false, 00:22:06.308 "data_offset": 0, 00:22:06.308 "data_size": 0 00:22:06.308 }, 00:22:06.308 { 00:22:06.308 "name": null, 00:22:06.308 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:06.308 "is_configured": false, 00:22:06.308 "data_offset": 2048, 00:22:06.308 "data_size": 63488 00:22:06.308 }, 00:22:06.308 { 00:22:06.308 "name": "BaseBdev3", 00:22:06.308 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:06.308 "is_configured": true, 00:22:06.308 "data_offset": 2048, 00:22:06.308 "data_size": 63488 00:22:06.308 } 00:22:06.308 ] 00:22:06.308 }' 00:22:06.308 07:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.308 07:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.875 07:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.875 07:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:06.875 07:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:22:06.875 07:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:07.133 [2024-05-16 07:35:00.621060] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.133 BaseBdev1 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:07.133 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:07.392 07:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:07.651 [ 00:22:07.651 { 00:22:07.651 "name": "BaseBdev1", 00:22:07.651 "aliases": [ 00:22:07.651 "cd6aaa53-1356-11ef-8e8f-9dd684e56d79" 00:22:07.651 ], 00:22:07.651 "product_name": "Malloc disk", 00:22:07.651 "block_size": 512, 00:22:07.651 "num_blocks": 65536, 00:22:07.651 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:07.651 "assigned_rate_limits": { 00:22:07.651 "rw_ios_per_sec": 0, 00:22:07.651 "rw_mbytes_per_sec": 0, 00:22:07.651 "r_mbytes_per_sec": 0, 00:22:07.651 "w_mbytes_per_sec": 0 00:22:07.651 }, 00:22:07.651 "claimed": true, 00:22:07.651 "claim_type": "exclusive_write", 00:22:07.651 "zoned": false, 00:22:07.651 "supported_io_types": { 00:22:07.651 "read": true, 00:22:07.651 "write": true, 00:22:07.651 "unmap": true, 00:22:07.651 "write_zeroes": true, 00:22:07.651 "flush": true, 00:22:07.651 "reset": true, 00:22:07.651 "compare": false, 00:22:07.651 "compare_and_write": false, 00:22:07.651 "abort": true, 00:22:07.651 "nvme_admin": false, 00:22:07.651 "nvme_io": false 00:22:07.651 }, 00:22:07.651 "memory_domains": [ 00:22:07.651 { 00:22:07.651 "dma_device_id": "system", 00:22:07.651 "dma_device_type": 1 00:22:07.651 }, 00:22:07.651 { 00:22:07.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.651 "dma_device_type": 2 00:22:07.651 } 00:22:07.651 ], 00:22:07.651 "driver_specific": {} 00:22:07.651 } 00:22:07.651 ] 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.651 07:35:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.651 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.910 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.910 "name": "Existed_Raid", 00:22:07.910 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:07.910 "strip_size_kb": 64, 00:22:07.910 "state": "configuring", 00:22:07.910 "raid_level": "concat", 00:22:07.910 "superblock": true, 00:22:07.910 "num_base_bdevs": 3, 00:22:07.910 "num_base_bdevs_discovered": 2, 00:22:07.910 "num_base_bdevs_operational": 3, 00:22:07.910 "base_bdevs_list": [ 00:22:07.910 { 00:22:07.910 "name": "BaseBdev1", 00:22:07.910 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:07.910 "is_configured": true, 00:22:07.910 "data_offset": 2048, 00:22:07.910 "data_size": 63488 00:22:07.910 }, 00:22:07.910 { 00:22:07.910 "name": null, 00:22:07.910 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:07.910 "is_configured": false, 00:22:07.910 "data_offset": 2048, 00:22:07.910 "data_size": 63488 00:22:07.910 }, 00:22:07.910 { 00:22:07.910 "name": "BaseBdev3", 00:22:07.910 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:07.910 "is_configured": true, 00:22:07.910 "data_offset": 2048, 00:22:07.911 "data_size": 63488 00:22:07.911 } 00:22:07.911 ] 00:22:07.911 }' 00:22:07.911 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.911 07:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.169 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.169 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:08.428 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:08.428 07:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:08.687 [2024-05-16 07:35:02.097015] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
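The verify_raid_bdev_state checks traced above boil down to a single bdev_raid_get_bdevs call filtered with jq. A minimal sketch of that check, using only the socket path, RPC name and jq filters that appear in the trace (the shell variable names are illustrative):

  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the Existed_Raid entry out of bdev_raid_get_bdevs, exactly as the helper does
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  # While a base bdev is missing, the array should still be "configuring" with fewer bdevs discovered
  jq -r '.state' <<< "$info"                      # e.g. "configuring"
  jq -r '.num_base_bdevs_discovered' <<< "$info"  # e.g. 2 of the 3 operational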
00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.687 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.945 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.945 "name": "Existed_Raid", 00:22:08.945 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:08.945 "strip_size_kb": 64, 00:22:08.945 "state": "configuring", 00:22:08.945 "raid_level": "concat", 00:22:08.945 "superblock": true, 00:22:08.945 "num_base_bdevs": 3, 00:22:08.945 "num_base_bdevs_discovered": 1, 00:22:08.945 "num_base_bdevs_operational": 3, 00:22:08.945 "base_bdevs_list": [ 00:22:08.945 { 00:22:08.945 "name": "BaseBdev1", 00:22:08.945 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:08.945 "is_configured": true, 00:22:08.945 "data_offset": 2048, 00:22:08.945 "data_size": 63488 00:22:08.945 }, 00:22:08.945 { 00:22:08.945 "name": null, 00:22:08.945 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:08.945 "is_configured": false, 00:22:08.945 "data_offset": 2048, 00:22:08.945 "data_size": 63488 00:22:08.945 }, 00:22:08.945 { 00:22:08.945 "name": null, 00:22:08.945 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:08.945 "is_configured": false, 00:22:08.945 "data_offset": 2048, 00:22:08.945 "data_size": 63488 00:22:08.945 } 00:22:08.945 ] 00:22:08.945 }' 00:22:08.945 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.945 07:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.202 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.202 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:09.460 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:22:09.460 07:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:09.718 [2024-05-16 07:35:03.253111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.718 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.977 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.977 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.977 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.977 "name": "Existed_Raid", 00:22:09.977 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:09.977 "strip_size_kb": 64, 00:22:09.977 "state": "configuring", 00:22:09.977 "raid_level": "concat", 00:22:09.977 "superblock": true, 00:22:09.977 "num_base_bdevs": 3, 00:22:09.977 "num_base_bdevs_discovered": 2, 00:22:09.977 "num_base_bdevs_operational": 3, 00:22:09.977 "base_bdevs_list": [ 00:22:09.977 { 00:22:09.977 "name": "BaseBdev1", 00:22:09.977 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:09.977 "is_configured": true, 00:22:09.977 "data_offset": 2048, 00:22:09.977 "data_size": 63488 00:22:09.977 }, 00:22:09.977 { 00:22:09.977 "name": null, 00:22:09.977 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:09.977 "is_configured": false, 00:22:09.977 "data_offset": 2048, 00:22:09.977 "data_size": 63488 00:22:09.977 }, 00:22:09.977 { 00:22:09.977 "name": "BaseBdev3", 00:22:09.977 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:09.977 "is_configured": true, 00:22:09.977 "data_offset": 2048, 00:22:09.977 "data_size": 63488 00:22:09.977 } 00:22:09.977 ] 00:22:09.977 }' 00:22:09.977 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.977 07:35:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.237 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.237 07:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:10.803 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:22:10.803 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:10.803 [2024-05-16 07:35:04.345179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:11.061 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:11.061 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:11.061 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:11.061 07:35:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:11.061 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.062 "name": "Existed_Raid", 00:22:11.062 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:11.062 "strip_size_kb": 64, 00:22:11.062 "state": "configuring", 00:22:11.062 "raid_level": "concat", 00:22:11.062 "superblock": true, 00:22:11.062 "num_base_bdevs": 3, 00:22:11.062 "num_base_bdevs_discovered": 1, 00:22:11.062 "num_base_bdevs_operational": 3, 00:22:11.062 "base_bdevs_list": [ 00:22:11.062 { 00:22:11.062 "name": null, 00:22:11.062 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:11.062 "is_configured": false, 00:22:11.062 "data_offset": 2048, 00:22:11.062 "data_size": 63488 00:22:11.062 }, 00:22:11.062 { 00:22:11.062 "name": null, 00:22:11.062 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:11.062 "is_configured": false, 00:22:11.062 "data_offset": 2048, 00:22:11.062 "data_size": 63488 00:22:11.062 }, 00:22:11.062 { 00:22:11.062 "name": "BaseBdev3", 00:22:11.062 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:11.062 "is_configured": true, 00:22:11.062 "data_offset": 2048, 00:22:11.062 "data_size": 63488 00:22:11.062 } 00:22:11.062 ] 00:22:11.062 }' 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.062 07:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.319 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.319 07:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:11.886 [2024-05-16 07:35:05.377959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:11.886 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.887 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.145 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.145 "name": "Existed_Raid", 00:22:12.145 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:12.145 "strip_size_kb": 64, 00:22:12.145 "state": "configuring", 00:22:12.145 "raid_level": "concat", 00:22:12.145 "superblock": true, 00:22:12.145 "num_base_bdevs": 3, 00:22:12.145 "num_base_bdevs_discovered": 2, 00:22:12.145 "num_base_bdevs_operational": 3, 00:22:12.145 "base_bdevs_list": [ 00:22:12.145 { 00:22:12.145 "name": null, 00:22:12.145 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:12.145 "is_configured": false, 00:22:12.145 "data_offset": 2048, 00:22:12.145 "data_size": 63488 00:22:12.145 }, 00:22:12.145 { 00:22:12.145 "name": "BaseBdev2", 00:22:12.145 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:12.145 "is_configured": true, 00:22:12.145 "data_offset": 2048, 00:22:12.145 "data_size": 63488 00:22:12.145 }, 00:22:12.145 { 00:22:12.145 "name": "BaseBdev3", 00:22:12.145 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:12.145 "is_configured": true, 00:22:12.145 "data_offset": 2048, 00:22:12.145 "data_size": 63488 00:22:12.145 } 00:22:12.145 ] 00:22:12.145 }' 00:22:12.145 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.145 07:35:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.403 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.403 07:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:12.971 07:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:22:12.971 07:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.971 07:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:12.971 07:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cd6aaa53-1356-11ef-8e8f-9dd684e56d79 00:22:13.229 [2024-05-16 07:35:06.634136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.229 [2024-05-16 07:35:06.634181] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a53ba00 00:22:13.229 [2024-05-16 07:35:06.634185] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:13.229 [2024-05-16 07:35:06.634201] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a59ee20 00:22:13.229 [2024-05-16 07:35:06.634232] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a53ba00 00:22:13.229 [2024-05-16 07:35:06.634235] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a53ba00 00:22:13.229 [2024-05-16 07:35:06.634249] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.229 NewBaseBdev 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:13.229 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.488 07:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.488 [ 00:22:13.488 { 00:22:13.488 "name": "NewBaseBdev", 00:22:13.488 "aliases": [ 00:22:13.488 "cd6aaa53-1356-11ef-8e8f-9dd684e56d79" 00:22:13.488 ], 00:22:13.488 "product_name": "Malloc disk", 00:22:13.488 "block_size": 512, 00:22:13.488 "num_blocks": 65536, 00:22:13.488 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:13.488 "assigned_rate_limits": { 00:22:13.488 "rw_ios_per_sec": 0, 00:22:13.488 "rw_mbytes_per_sec": 0, 00:22:13.488 "r_mbytes_per_sec": 0, 00:22:13.488 "w_mbytes_per_sec": 0 00:22:13.488 }, 00:22:13.488 "claimed": true, 00:22:13.488 "claim_type": "exclusive_write", 00:22:13.488 "zoned": false, 00:22:13.488 "supported_io_types": { 00:22:13.488 "read": true, 00:22:13.488 "write": true, 00:22:13.488 "unmap": true, 00:22:13.488 "write_zeroes": true, 00:22:13.488 "flush": true, 00:22:13.488 "reset": true, 00:22:13.488 "compare": false, 00:22:13.488 "compare_and_write": false, 00:22:13.488 "abort": true, 00:22:13.488 "nvme_admin": false, 00:22:13.488 "nvme_io": false 00:22:13.488 }, 00:22:13.488 "memory_domains": [ 00:22:13.488 { 00:22:13.488 "dma_device_id": "system", 00:22:13.488 "dma_device_type": 1 00:22:13.488 }, 00:22:13.488 { 00:22:13.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.488 "dma_device_type": 2 00:22:13.488 } 00:22:13.488 ], 00:22:13.488 "driver_specific": {} 00:22:13.488 } 00:22:13.488 ] 00:22:13.488 07:35:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.488 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.747 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.747 "name": "Existed_Raid", 00:22:13.747 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:13.747 "strip_size_kb": 64, 00:22:13.747 "state": "online", 00:22:13.747 "raid_level": "concat", 00:22:13.747 "superblock": true, 00:22:13.747 "num_base_bdevs": 3, 00:22:13.747 "num_base_bdevs_discovered": 3, 00:22:13.747 "num_base_bdevs_operational": 3, 00:22:13.747 "base_bdevs_list": [ 00:22:13.747 { 00:22:13.747 "name": "NewBaseBdev", 00:22:13.747 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:13.747 "is_configured": true, 00:22:13.747 "data_offset": 2048, 00:22:13.747 "data_size": 63488 00:22:13.747 }, 00:22:13.747 { 00:22:13.747 "name": "BaseBdev2", 00:22:13.747 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:13.747 "is_configured": true, 00:22:13.747 "data_offset": 2048, 00:22:13.747 "data_size": 63488 00:22:13.747 }, 00:22:13.747 { 00:22:13.747 "name": "BaseBdev3", 00:22:13.747 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:13.747 "is_configured": true, 00:22:13.747 "data_offset": 2048, 00:22:13.747 "data_size": 63488 00:22:13.747 } 00:22:13.747 ] 00:22:13.747 }' 00:22:13.747 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.747 07:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:14.005 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:14.263 [2024-05-16 07:35:07.798143] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.263 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:14.263 "name": "Existed_Raid", 00:22:14.263 "aliases": [ 00:22:14.263 "cc49f7be-1356-11ef-8e8f-9dd684e56d79" 00:22:14.263 ], 00:22:14.263 "product_name": "Raid Volume", 00:22:14.263 "block_size": 512, 00:22:14.263 "num_blocks": 190464, 00:22:14.263 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:14.263 "assigned_rate_limits": { 00:22:14.263 "rw_ios_per_sec": 0, 00:22:14.263 "rw_mbytes_per_sec": 0, 00:22:14.263 "r_mbytes_per_sec": 0, 00:22:14.263 "w_mbytes_per_sec": 0 00:22:14.263 }, 00:22:14.263 "claimed": false, 00:22:14.263 "zoned": false, 00:22:14.263 "supported_io_types": { 00:22:14.263 "read": true, 00:22:14.263 "write": true, 00:22:14.263 "unmap": true, 00:22:14.263 "write_zeroes": true, 00:22:14.263 "flush": true, 00:22:14.263 "reset": true, 00:22:14.264 "compare": false, 00:22:14.264 "compare_and_write": false, 00:22:14.264 "abort": false, 00:22:14.264 "nvme_admin": false, 00:22:14.264 "nvme_io": false 00:22:14.264 }, 00:22:14.264 "memory_domains": [ 00:22:14.264 { 00:22:14.264 "dma_device_id": "system", 00:22:14.264 "dma_device_type": 1 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.264 "dma_device_type": 2 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "dma_device_id": "system", 00:22:14.264 "dma_device_type": 1 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.264 "dma_device_type": 2 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "dma_device_id": "system", 00:22:14.264 "dma_device_type": 1 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.264 "dma_device_type": 2 00:22:14.264 } 00:22:14.264 ], 00:22:14.264 "driver_specific": { 00:22:14.264 "raid": { 00:22:14.264 "uuid": "cc49f7be-1356-11ef-8e8f-9dd684e56d79", 00:22:14.264 "strip_size_kb": 64, 00:22:14.264 "state": "online", 00:22:14.264 "raid_level": "concat", 00:22:14.264 "superblock": true, 00:22:14.264 "num_base_bdevs": 3, 00:22:14.264 "num_base_bdevs_discovered": 3, 00:22:14.264 "num_base_bdevs_operational": 3, 00:22:14.264 "base_bdevs_list": [ 00:22:14.264 { 00:22:14.264 "name": "NewBaseBdev", 00:22:14.264 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:14.264 "is_configured": true, 00:22:14.264 "data_offset": 2048, 00:22:14.264 "data_size": 63488 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "name": "BaseBdev2", 00:22:14.264 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:14.264 "is_configured": true, 00:22:14.264 "data_offset": 2048, 00:22:14.264 "data_size": 63488 00:22:14.264 }, 00:22:14.264 { 00:22:14.264 "name": "BaseBdev3", 00:22:14.264 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:14.264 "is_configured": true, 00:22:14.264 "data_offset": 2048, 00:22:14.264 "data_size": 63488 00:22:14.264 } 00:22:14.264 ] 00:22:14.264 } 00:22:14.264 } 00:22:14.264 }' 00:22:14.264 07:35:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.522 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:22:14.522 BaseBdev2 00:22:14.522 BaseBdev3' 00:22:14.522 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:14.522 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:14.522 07:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:14.522 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:14.522 "name": "NewBaseBdev", 00:22:14.522 "aliases": [ 00:22:14.522 "cd6aaa53-1356-11ef-8e8f-9dd684e56d79" 00:22:14.522 ], 00:22:14.522 "product_name": "Malloc disk", 00:22:14.522 "block_size": 512, 00:22:14.522 "num_blocks": 65536, 00:22:14.522 "uuid": "cd6aaa53-1356-11ef-8e8f-9dd684e56d79", 00:22:14.522 "assigned_rate_limits": { 00:22:14.522 "rw_ios_per_sec": 0, 00:22:14.522 "rw_mbytes_per_sec": 0, 00:22:14.522 "r_mbytes_per_sec": 0, 00:22:14.522 "w_mbytes_per_sec": 0 00:22:14.522 }, 00:22:14.522 "claimed": true, 00:22:14.522 "claim_type": "exclusive_write", 00:22:14.522 "zoned": false, 00:22:14.522 "supported_io_types": { 00:22:14.522 "read": true, 00:22:14.522 "write": true, 00:22:14.522 "unmap": true, 00:22:14.522 "write_zeroes": true, 00:22:14.522 "flush": true, 00:22:14.522 "reset": true, 00:22:14.522 "compare": false, 00:22:14.522 "compare_and_write": false, 00:22:14.522 "abort": true, 00:22:14.522 "nvme_admin": false, 00:22:14.522 "nvme_io": false 00:22:14.522 }, 00:22:14.522 "memory_domains": [ 00:22:14.522 { 00:22:14.522 "dma_device_id": "system", 00:22:14.522 "dma_device_type": 1 00:22:14.522 }, 00:22:14.522 { 00:22:14.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.522 "dma_device_type": 2 00:22:14.522 } 00:22:14.522 ], 00:22:14.522 "driver_specific": {} 00:22:14.522 }' 00:22:14.522 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:14.522 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:14.781 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:15.041 "name": "BaseBdev2", 00:22:15.041 "aliases": [ 00:22:15.041 "cb4c1044-1356-11ef-8e8f-9dd684e56d79" 00:22:15.041 ], 00:22:15.041 "product_name": "Malloc disk", 00:22:15.041 "block_size": 512, 00:22:15.041 "num_blocks": 65536, 00:22:15.041 "uuid": "cb4c1044-1356-11ef-8e8f-9dd684e56d79", 00:22:15.041 "assigned_rate_limits": { 00:22:15.041 "rw_ios_per_sec": 0, 00:22:15.041 "rw_mbytes_per_sec": 0, 00:22:15.041 "r_mbytes_per_sec": 0, 00:22:15.041 "w_mbytes_per_sec": 0 00:22:15.041 }, 00:22:15.041 "claimed": true, 00:22:15.041 "claim_type": "exclusive_write", 00:22:15.041 "zoned": false, 00:22:15.041 "supported_io_types": { 00:22:15.041 "read": true, 00:22:15.041 "write": true, 00:22:15.041 "unmap": true, 00:22:15.041 "write_zeroes": true, 00:22:15.041 "flush": true, 00:22:15.041 "reset": true, 00:22:15.041 "compare": false, 00:22:15.041 "compare_and_write": false, 00:22:15.041 "abort": true, 00:22:15.041 "nvme_admin": false, 00:22:15.041 "nvme_io": false 00:22:15.041 }, 00:22:15.041 "memory_domains": [ 00:22:15.041 { 00:22:15.041 "dma_device_id": "system", 00:22:15.041 "dma_device_type": 1 00:22:15.041 }, 00:22:15.041 { 00:22:15.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.041 "dma_device_type": 2 00:22:15.041 } 00:22:15.041 ], 00:22:15.041 "driver_specific": {} 00:22:15.041 }' 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:15.041 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:15.300 "name": "BaseBdev3", 00:22:15.300 
"aliases": [ 00:22:15.300 "cbccd855-1356-11ef-8e8f-9dd684e56d79" 00:22:15.300 ], 00:22:15.300 "product_name": "Malloc disk", 00:22:15.300 "block_size": 512, 00:22:15.300 "num_blocks": 65536, 00:22:15.300 "uuid": "cbccd855-1356-11ef-8e8f-9dd684e56d79", 00:22:15.300 "assigned_rate_limits": { 00:22:15.300 "rw_ios_per_sec": 0, 00:22:15.300 "rw_mbytes_per_sec": 0, 00:22:15.300 "r_mbytes_per_sec": 0, 00:22:15.300 "w_mbytes_per_sec": 0 00:22:15.300 }, 00:22:15.300 "claimed": true, 00:22:15.300 "claim_type": "exclusive_write", 00:22:15.300 "zoned": false, 00:22:15.300 "supported_io_types": { 00:22:15.300 "read": true, 00:22:15.300 "write": true, 00:22:15.300 "unmap": true, 00:22:15.300 "write_zeroes": true, 00:22:15.300 "flush": true, 00:22:15.300 "reset": true, 00:22:15.300 "compare": false, 00:22:15.300 "compare_and_write": false, 00:22:15.300 "abort": true, 00:22:15.300 "nvme_admin": false, 00:22:15.300 "nvme_io": false 00:22:15.300 }, 00:22:15.300 "memory_domains": [ 00:22:15.300 { 00:22:15.300 "dma_device_id": "system", 00:22:15.300 "dma_device_type": 1 00:22:15.300 }, 00:22:15.300 { 00:22:15.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.300 "dma_device_type": 2 00:22:15.300 } 00:22:15.300 ], 00:22:15.300 "driver_specific": {} 00:22:15.300 }' 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:15.300 07:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.559 [2024-05-16 07:35:08.990158] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.559 [2024-05-16 07:35:08.990180] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.559 [2024-05-16 07:35:08.990194] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.559 [2024-05-16 07:35:08.990206] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.559 [2024-05-16 07:35:08.990210] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a53ba00 name Existed_Raid, state offline 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 54576 00:22:15.559 07:35:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 54576 ']' 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 54576 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 54576 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:15.559 killing process with pid 54576 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 54576' 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 54576 00:22:15.559 [2024-05-16 07:35:09.015253] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.559 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 54576 00:22:15.559 [2024-05-16 07:35:09.029552] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.818 07:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:22:15.818 00:22:15.818 real 0m23.560s 00:22:15.818 user 0m43.189s 00:22:15.818 sys 0m3.166s 00:22:15.818 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:15.818 ************************************ 00:22:15.818 END TEST raid_state_function_test_sb 00:22:15.818 ************************************ 00:22:15.818 07:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.818 07:35:09 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:22:15.818 07:35:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:15.818 07:35:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:15.818 07:35:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.818 ************************************ 00:22:15.818 START TEST raid_superblock_test 00:22:15.818 ************************************ 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 
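Condensed, the raid_state_function_test_sb run that just ended drives the RPC target through roughly the following sequence; this is a sketch assembled from commands visible in the trace (same socket, bdev names and 64 KiB strip size), not the actual test script, which interleaves these calls with state assertions:

  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Malloc base bdevs: 32 MiB each, 512-byte blocks
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
      $rpc bdev_wait_for_examine
  done
  # Concat array with superblock (-s) and a 64 KiB strip size
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Exercise the configuring/online transitions the test asserts on
  $rpc bdev_raid_remove_base_bdev BaseBdev2
  $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  $rpc bdev_raid_get_bdevs all
  # Tear down
  $rpc bdev_raid_delete Existed_Raid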
00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=55304 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 55304 /var/tmp/spdk-raid.sock 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 55304 ']' 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.818 07:35:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.818 [2024-05-16 07:35:09.245692] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:15.818 [2024-05-16 07:35:09.245981] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:16.385 EAL: TSC is not safe to use in SMP mode 00:22:16.385 EAL: TSC is not invariant 00:22:16.385 [2024-05-16 07:35:09.717664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.385 [2024-05-16 07:35:09.800753] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
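The raid_superblock_test starting here first brings up the same standalone bdev_svc RPC target it will drive; a rough equivalent of that startup (the polling loop is only a stand-in for the test's waitforlisten helper, and rpc_get_methods is used here merely as a cheap probe of the socket):

  /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # Wait until the app answers RPCs on the UNIX socket before issuing bdev commands
  until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done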
00:22:16.385 [2024-05-16 07:35:09.802794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.385 [2024-05-16 07:35:09.803506] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.385 [2024-05-16 07:35:09.803519] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:16.951 malloc1 00:22:16.951 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.210 [2024-05-16 07:35:10.666139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.210 [2024-05-16 07:35:10.666193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.210 [2024-05-16 07:35:10.666754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b47f780 00:22:17.210 [2024-05-16 07:35:10.666779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.210 [2024-05-16 07:35:10.667532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.210 [2024-05-16 07:35:10.667567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.210 pt1 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.210 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.210 07:35:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:17.468 malloc2 00:22:17.468 07:35:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.727 [2024-05-16 07:35:11.130185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.727 [2024-05-16 07:35:11.130240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.727 [2024-05-16 07:35:11.130264] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b47fc80 00:22:17.727 [2024-05-16 07:35:11.130271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.727 [2024-05-16 07:35:11.130783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.727 [2024-05-16 07:35:11.130817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.727 pt2 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.727 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:17.996 malloc3 00:22:17.996 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:18.255 [2024-05-16 07:35:11.646227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:18.255 [2024-05-16 07:35:11.646284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.255 [2024-05-16 07:35:11.646325] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b480180 00:22:18.255 [2024-05-16 07:35:11.646333] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.255 [2024-05-16 07:35:11.646834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.255 [2024-05-16 07:35:11.646858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:18.255 pt3 00:22:18.255 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:18.255 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:18.255 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:18.513 [2024-05-16 07:35:11.910248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.513 [2024-05-16 07:35:11.910652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.513 [2024-05-16 07:35:11.910665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:18.513 [2024-05-16 07:35:11.910707] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b480400 00:22:18.513 [2024-05-16 07:35:11.910712] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:18.513 [2024-05-16 07:35:11.910741] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4e2e20 00:22:18.513 [2024-05-16 07:35:11.910808] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b480400 00:22:18.513 [2024-05-16 07:35:11.910812] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b480400 00:22:18.513 [2024-05-16 07:35:11.910839] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.513 07:35:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.771 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.771 "name": "raid_bdev1", 00:22:18.771 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:18.771 "strip_size_kb": 64, 00:22:18.771 "state": "online", 00:22:18.771 "raid_level": "concat", 00:22:18.771 "superblock": true, 00:22:18.771 "num_base_bdevs": 3, 00:22:18.771 "num_base_bdevs_discovered": 3, 00:22:18.771 "num_base_bdevs_operational": 3, 00:22:18.771 "base_bdevs_list": [ 00:22:18.771 { 00:22:18.771 "name": "pt1", 00:22:18.771 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:18.771 "is_configured": true, 00:22:18.771 "data_offset": 2048, 00:22:18.771 "data_size": 63488 00:22:18.771 }, 00:22:18.771 { 00:22:18.771 "name": "pt2", 00:22:18.771 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:18.771 "is_configured": true, 00:22:18.771 
"data_offset": 2048, 00:22:18.772 "data_size": 63488 00:22:18.772 }, 00:22:18.772 { 00:22:18.772 "name": "pt3", 00:22:18.772 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:18.772 "is_configured": true, 00:22:18.772 "data_offset": 2048, 00:22:18.772 "data_size": 63488 00:22:18.772 } 00:22:18.772 ] 00:22:18.772 }' 00:22:18.772 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.772 07:35:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:19.029 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:19.029 [2024-05-16 07:35:12.582318] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:19.288 "name": "raid_bdev1", 00:22:19.288 "aliases": [ 00:22:19.288 "d42545d9-1356-11ef-8e8f-9dd684e56d79" 00:22:19.288 ], 00:22:19.288 "product_name": "Raid Volume", 00:22:19.288 "block_size": 512, 00:22:19.288 "num_blocks": 190464, 00:22:19.288 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:19.288 "assigned_rate_limits": { 00:22:19.288 "rw_ios_per_sec": 0, 00:22:19.288 "rw_mbytes_per_sec": 0, 00:22:19.288 "r_mbytes_per_sec": 0, 00:22:19.288 "w_mbytes_per_sec": 0 00:22:19.288 }, 00:22:19.288 "claimed": false, 00:22:19.288 "zoned": false, 00:22:19.288 "supported_io_types": { 00:22:19.288 "read": true, 00:22:19.288 "write": true, 00:22:19.288 "unmap": true, 00:22:19.288 "write_zeroes": true, 00:22:19.288 "flush": true, 00:22:19.288 "reset": true, 00:22:19.288 "compare": false, 00:22:19.288 "compare_and_write": false, 00:22:19.288 "abort": false, 00:22:19.288 "nvme_admin": false, 00:22:19.288 "nvme_io": false 00:22:19.288 }, 00:22:19.288 "memory_domains": [ 00:22:19.288 { 00:22:19.288 "dma_device_id": "system", 00:22:19.288 "dma_device_type": 1 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.288 "dma_device_type": 2 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "system", 00:22:19.288 "dma_device_type": 1 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.288 "dma_device_type": 2 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "system", 00:22:19.288 "dma_device_type": 1 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.288 "dma_device_type": 2 00:22:19.288 } 00:22:19.288 ], 00:22:19.288 "driver_specific": { 00:22:19.288 "raid": { 00:22:19.288 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:19.288 "strip_size_kb": 64, 00:22:19.288 "state": "online", 00:22:19.288 "raid_level": "concat", 
00:22:19.288 "superblock": true, 00:22:19.288 "num_base_bdevs": 3, 00:22:19.288 "num_base_bdevs_discovered": 3, 00:22:19.288 "num_base_bdevs_operational": 3, 00:22:19.288 "base_bdevs_list": [ 00:22:19.288 { 00:22:19.288 "name": "pt1", 00:22:19.288 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:19.288 "is_configured": true, 00:22:19.288 "data_offset": 2048, 00:22:19.288 "data_size": 63488 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "name": "pt2", 00:22:19.288 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:19.288 "is_configured": true, 00:22:19.288 "data_offset": 2048, 00:22:19.288 "data_size": 63488 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "name": "pt3", 00:22:19.288 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:19.288 "is_configured": true, 00:22:19.288 "data_offset": 2048, 00:22:19.288 "data_size": 63488 00:22:19.288 } 00:22:19.288 ] 00:22:19.288 } 00:22:19.288 } 00:22:19.288 }' 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:19.288 pt2 00:22:19.288 pt3' 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:19.288 "name": "pt1", 00:22:19.288 "aliases": [ 00:22:19.288 "3c518f43-6e0d-7655-a347-12ff6d3a3960" 00:22:19.288 ], 00:22:19.288 "product_name": "passthru", 00:22:19.288 "block_size": 512, 00:22:19.288 "num_blocks": 65536, 00:22:19.288 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:19.288 "assigned_rate_limits": { 00:22:19.288 "rw_ios_per_sec": 0, 00:22:19.288 "rw_mbytes_per_sec": 0, 00:22:19.288 "r_mbytes_per_sec": 0, 00:22:19.288 "w_mbytes_per_sec": 0 00:22:19.288 }, 00:22:19.288 "claimed": true, 00:22:19.288 "claim_type": "exclusive_write", 00:22:19.288 "zoned": false, 00:22:19.288 "supported_io_types": { 00:22:19.288 "read": true, 00:22:19.288 "write": true, 00:22:19.288 "unmap": true, 00:22:19.288 "write_zeroes": true, 00:22:19.288 "flush": true, 00:22:19.288 "reset": true, 00:22:19.288 "compare": false, 00:22:19.288 "compare_and_write": false, 00:22:19.288 "abort": true, 00:22:19.288 "nvme_admin": false, 00:22:19.288 "nvme_io": false 00:22:19.288 }, 00:22:19.288 "memory_domains": [ 00:22:19.288 { 00:22:19.288 "dma_device_id": "system", 00:22:19.288 "dma_device_type": 1 00:22:19.288 }, 00:22:19.288 { 00:22:19.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.288 "dma_device_type": 2 00:22:19.288 } 00:22:19.288 ], 00:22:19.288 "driver_specific": { 00:22:19.288 "passthru": { 00:22:19.288 "name": "pt1", 00:22:19.288 "base_bdev_name": "malloc1" 00:22:19.288 } 00:22:19.288 } 00:22:19.288 }' 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:19.288 
07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.288 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:19.545 07:35:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:19.802 "name": "pt2", 00:22:19.802 "aliases": [ 00:22:19.802 "59887f9e-bc73-eb50-abca-1d0eaae3c586" 00:22:19.802 ], 00:22:19.802 "product_name": "passthru", 00:22:19.802 "block_size": 512, 00:22:19.802 "num_blocks": 65536, 00:22:19.802 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:19.802 "assigned_rate_limits": { 00:22:19.802 "rw_ios_per_sec": 0, 00:22:19.802 "rw_mbytes_per_sec": 0, 00:22:19.802 "r_mbytes_per_sec": 0, 00:22:19.802 "w_mbytes_per_sec": 0 00:22:19.802 }, 00:22:19.802 "claimed": true, 00:22:19.802 "claim_type": "exclusive_write", 00:22:19.802 "zoned": false, 00:22:19.802 "supported_io_types": { 00:22:19.802 "read": true, 00:22:19.802 "write": true, 00:22:19.802 "unmap": true, 00:22:19.802 "write_zeroes": true, 00:22:19.802 "flush": true, 00:22:19.802 "reset": true, 00:22:19.802 "compare": false, 00:22:19.802 "compare_and_write": false, 00:22:19.802 "abort": true, 00:22:19.802 "nvme_admin": false, 00:22:19.802 "nvme_io": false 00:22:19.802 }, 00:22:19.802 "memory_domains": [ 00:22:19.802 { 00:22:19.802 "dma_device_id": "system", 00:22:19.802 "dma_device_type": 1 00:22:19.802 }, 00:22:19.802 { 00:22:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.802 "dma_device_type": 2 00:22:19.802 } 00:22:19.802 ], 00:22:19.802 "driver_specific": { 00:22:19.802 "passthru": { 00:22:19.802 "name": "pt2", 00:22:19.802 "base_bdev_name": "malloc2" 00:22:19.802 } 00:22:19.802 } 00:22:19.802 }' 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:19.802 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:20.060 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:20.060 "name": "pt3", 00:22:20.060 "aliases": [ 00:22:20.060 "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469" 00:22:20.060 ], 00:22:20.060 "product_name": "passthru", 00:22:20.060 "block_size": 512, 00:22:20.060 "num_blocks": 65536, 00:22:20.060 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:20.060 "assigned_rate_limits": { 00:22:20.060 "rw_ios_per_sec": 0, 00:22:20.060 "rw_mbytes_per_sec": 0, 00:22:20.060 "r_mbytes_per_sec": 0, 00:22:20.060 "w_mbytes_per_sec": 0 00:22:20.060 }, 00:22:20.060 "claimed": true, 00:22:20.060 "claim_type": "exclusive_write", 00:22:20.060 "zoned": false, 00:22:20.060 "supported_io_types": { 00:22:20.060 "read": true, 00:22:20.060 "write": true, 00:22:20.061 "unmap": true, 00:22:20.061 "write_zeroes": true, 00:22:20.061 "flush": true, 00:22:20.061 "reset": true, 00:22:20.061 "compare": false, 00:22:20.061 "compare_and_write": false, 00:22:20.061 "abort": true, 00:22:20.061 "nvme_admin": false, 00:22:20.061 "nvme_io": false 00:22:20.061 }, 00:22:20.061 "memory_domains": [ 00:22:20.061 { 00:22:20.061 "dma_device_id": "system", 00:22:20.061 "dma_device_type": 1 00:22:20.061 }, 00:22:20.061 { 00:22:20.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.061 "dma_device_type": 2 00:22:20.061 } 00:22:20.061 ], 00:22:20.061 "driver_specific": { 00:22:20.061 "passthru": { 00:22:20.061 "name": "pt3", 00:22:20.061 "base_bdev_name": "malloc3" 00:22:20.061 } 00:22:20.061 } 00:22:20.061 }' 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:20.061 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:20.318 [2024-05-16 07:35:13.826442] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.318 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d42545d9-1356-11ef-8e8f-9dd684e56d79 00:22:20.318 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d42545d9-1356-11ef-8e8f-9dd684e56d79 ']' 00:22:20.318 07:35:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:20.576 [2024-05-16 07:35:14.046417] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.576 [2024-05-16 07:35:14.046438] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.576 [2024-05-16 07:35:14.046453] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.576 [2024-05-16 07:35:14.046466] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.576 [2024-05-16 07:35:14.046469] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b480400 name raid_bdev1, state offline 00:22:20.576 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.576 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:20.833 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:20.833 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:20.833 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:20.833 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:21.091 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.091 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:21.658 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:21.658 07:35:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:21.658 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:21.658 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:21.915 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:21.915 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:21.915 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 
-- # local es=0 00:22:21.915 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:21.916 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:22.174 [2024-05-16 07:35:15.618547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:22.174 [2024-05-16 07:35:15.618975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:22.174 [2024-05-16 07:35:15.619002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:22.174 [2024-05-16 07:35:15.619014] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:22.174 [2024-05-16 07:35:15.619050] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:22.174 [2024-05-16 07:35:15.619058] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:22.174 [2024-05-16 07:35:15.619066] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.174 [2024-05-16 07:35:15.619070] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b480180 name raid_bdev1, state configuring 00:22:22.174 request: 00:22:22.174 { 00:22:22.174 "name": "raid_bdev1", 00:22:22.174 "raid_level": "concat", 00:22:22.174 "base_bdevs": [ 00:22:22.174 "malloc1", 00:22:22.174 "malloc2", 00:22:22.174 "malloc3" 00:22:22.174 ], 00:22:22.174 "superblock": false, 00:22:22.174 "strip_size_kb": 64, 00:22:22.174 "method": "bdev_raid_create", 00:22:22.174 "req_id": 1 00:22:22.174 } 00:22:22.174 Got JSON-RPC error response 00:22:22.174 response: 00:22:22.174 { 00:22:22.174 "code": -17, 00:22:22.174 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:22.174 } 00:22:22.174 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:22.174 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.174 07:35:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.174 07:35:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.174 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.174 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:22.432 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:22.432 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:22.432 07:35:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:22.691 [2024-05-16 07:35:16.130597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:22.691 [2024-05-16 07:35:16.130643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.691 [2024-05-16 07:35:16.130669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b47fc80 00:22:22.691 [2024-05-16 07:35:16.130676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.691 [2024-05-16 07:35:16.131145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.691 [2024-05-16 07:35:16.131168] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:22.691 [2024-05-16 07:35:16.131186] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:22.691 [2024-05-16 07:35:16.131195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:22.691 pt1 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.691 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.949 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.949 "name": "raid_bdev1", 00:22:22.949 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:22.949 "strip_size_kb": 64, 00:22:22.949 "state": 
"configuring", 00:22:22.949 "raid_level": "concat", 00:22:22.949 "superblock": true, 00:22:22.949 "num_base_bdevs": 3, 00:22:22.949 "num_base_bdevs_discovered": 1, 00:22:22.949 "num_base_bdevs_operational": 3, 00:22:22.949 "base_bdevs_list": [ 00:22:22.949 { 00:22:22.949 "name": "pt1", 00:22:22.949 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:22.949 "is_configured": true, 00:22:22.949 "data_offset": 2048, 00:22:22.949 "data_size": 63488 00:22:22.949 }, 00:22:22.949 { 00:22:22.949 "name": null, 00:22:22.949 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:22.949 "is_configured": false, 00:22:22.949 "data_offset": 2048, 00:22:22.949 "data_size": 63488 00:22:22.949 }, 00:22:22.949 { 00:22:22.949 "name": null, 00:22:22.949 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:22.949 "is_configured": false, 00:22:22.949 "data_offset": 2048, 00:22:22.949 "data_size": 63488 00:22:22.949 } 00:22:22.949 ] 00:22:22.949 }' 00:22:22.949 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.949 07:35:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.516 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:23.516 07:35:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:23.516 [2024-05-16 07:35:17.030665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:23.516 [2024-05-16 07:35:17.030716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.516 [2024-05-16 07:35:17.030742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b480680 00:22:23.516 [2024-05-16 07:35:17.030748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.517 [2024-05-16 07:35:17.030843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.517 [2024-05-16 07:35:17.030850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:23.517 [2024-05-16 07:35:17.030867] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:23.517 [2024-05-16 07:35:17.030874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:23.517 pt2 00:22:23.517 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:23.774 [2024-05-16 07:35:17.294676] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.774 
07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.774 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.033 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.033 "name": "raid_bdev1", 00:22:24.033 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:24.033 "strip_size_kb": 64, 00:22:24.033 "state": "configuring", 00:22:24.033 "raid_level": "concat", 00:22:24.033 "superblock": true, 00:22:24.033 "num_base_bdevs": 3, 00:22:24.033 "num_base_bdevs_discovered": 1, 00:22:24.033 "num_base_bdevs_operational": 3, 00:22:24.033 "base_bdevs_list": [ 00:22:24.033 { 00:22:24.033 "name": "pt1", 00:22:24.033 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:24.033 "is_configured": true, 00:22:24.033 "data_offset": 2048, 00:22:24.033 "data_size": 63488 00:22:24.033 }, 00:22:24.033 { 00:22:24.033 "name": null, 00:22:24.033 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:24.033 "is_configured": false, 00:22:24.033 "data_offset": 2048, 00:22:24.033 "data_size": 63488 00:22:24.033 }, 00:22:24.033 { 00:22:24.033 "name": null, 00:22:24.033 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:24.033 "is_configured": false, 00:22:24.033 "data_offset": 2048, 00:22:24.033 "data_size": 63488 00:22:24.033 } 00:22:24.033 ] 00:22:24.033 }' 00:22:24.033 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.033 07:35:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.616 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:24.616 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:24.616 07:35:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:24.616 [2024-05-16 07:35:18.094736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:24.616 [2024-05-16 07:35:18.094784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.616 [2024-05-16 07:35:18.094804] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b480680 00:22:24.616 [2024-05-16 07:35:18.094811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.616 [2024-05-16 07:35:18.094878] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.616 [2024-05-16 07:35:18.094901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:24.616 [2024-05-16 07:35:18.094917] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:24.616 [2024-05-16 07:35:18.094923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:24.616 pt2 00:22:24.616 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:24.616 07:35:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:24.616 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:24.875 [2024-05-16 07:35:18.374747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:24.875 [2024-05-16 07:35:18.374779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.875 [2024-05-16 07:35:18.374793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b480400 00:22:24.875 [2024-05-16 07:35:18.374799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.875 [2024-05-16 07:35:18.374853] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.875 [2024-05-16 07:35:18.374860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:24.875 [2024-05-16 07:35:18.374872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:24.875 [2024-05-16 07:35:18.374877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:24.875 [2024-05-16 07:35:18.374891] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b47f780 00:22:24.875 [2024-05-16 07:35:18.374895] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:24.875 [2024-05-16 07:35:18.374910] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4e2e20 00:22:24.875 [2024-05-16 07:35:18.374943] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b47f780 00:22:24.875 [2024-05-16 07:35:18.374946] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b47f780 00:22:24.875 [2024-05-16 07:35:18.374960] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.875 pt3 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:24.875 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.134 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.134 "name": "raid_bdev1", 00:22:25.134 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:25.134 "strip_size_kb": 64, 00:22:25.134 "state": "online", 00:22:25.134 "raid_level": "concat", 00:22:25.134 "superblock": true, 00:22:25.134 "num_base_bdevs": 3, 00:22:25.134 "num_base_bdevs_discovered": 3, 00:22:25.134 "num_base_bdevs_operational": 3, 00:22:25.134 "base_bdevs_list": [ 00:22:25.134 { 00:22:25.134 "name": "pt1", 00:22:25.134 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:25.134 "is_configured": true, 00:22:25.134 "data_offset": 2048, 00:22:25.134 "data_size": 63488 00:22:25.134 }, 00:22:25.134 { 00:22:25.134 "name": "pt2", 00:22:25.134 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:25.134 "is_configured": true, 00:22:25.134 "data_offset": 2048, 00:22:25.134 "data_size": 63488 00:22:25.134 }, 00:22:25.134 { 00:22:25.134 "name": "pt3", 00:22:25.134 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:25.134 "is_configured": true, 00:22:25.134 "data_offset": 2048, 00:22:25.134 "data_size": 63488 00:22:25.134 } 00:22:25.134 ] 00:22:25.134 }' 00:22:25.134 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.134 07:35:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:25.700 07:35:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:25.700 [2024-05-16 07:35:19.202832] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:25.700 "name": "raid_bdev1", 00:22:25.700 "aliases": [ 00:22:25.700 "d42545d9-1356-11ef-8e8f-9dd684e56d79" 00:22:25.700 ], 00:22:25.700 "product_name": "Raid Volume", 00:22:25.700 "block_size": 512, 00:22:25.700 "num_blocks": 190464, 00:22:25.700 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:25.700 "assigned_rate_limits": { 00:22:25.700 "rw_ios_per_sec": 0, 00:22:25.700 "rw_mbytes_per_sec": 0, 00:22:25.700 "r_mbytes_per_sec": 0, 00:22:25.700 "w_mbytes_per_sec": 0 00:22:25.700 }, 00:22:25.700 "claimed": false, 00:22:25.700 "zoned": false, 00:22:25.700 "supported_io_types": { 00:22:25.700 "read": true, 00:22:25.700 "write": true, 00:22:25.700 "unmap": true, 00:22:25.700 "write_zeroes": true, 00:22:25.700 "flush": true, 00:22:25.700 "reset": true, 00:22:25.700 "compare": false, 00:22:25.700 "compare_and_write": false, 00:22:25.700 "abort": false, 
00:22:25.700 "nvme_admin": false, 00:22:25.700 "nvme_io": false 00:22:25.700 }, 00:22:25.700 "memory_domains": [ 00:22:25.700 { 00:22:25.700 "dma_device_id": "system", 00:22:25.700 "dma_device_type": 1 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.700 "dma_device_type": 2 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "dma_device_id": "system", 00:22:25.700 "dma_device_type": 1 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.700 "dma_device_type": 2 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "dma_device_id": "system", 00:22:25.700 "dma_device_type": 1 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.700 "dma_device_type": 2 00:22:25.700 } 00:22:25.700 ], 00:22:25.700 "driver_specific": { 00:22:25.700 "raid": { 00:22:25.700 "uuid": "d42545d9-1356-11ef-8e8f-9dd684e56d79", 00:22:25.700 "strip_size_kb": 64, 00:22:25.700 "state": "online", 00:22:25.700 "raid_level": "concat", 00:22:25.700 "superblock": true, 00:22:25.700 "num_base_bdevs": 3, 00:22:25.700 "num_base_bdevs_discovered": 3, 00:22:25.700 "num_base_bdevs_operational": 3, 00:22:25.700 "base_bdevs_list": [ 00:22:25.700 { 00:22:25.700 "name": "pt1", 00:22:25.700 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:25.700 "is_configured": true, 00:22:25.700 "data_offset": 2048, 00:22:25.700 "data_size": 63488 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "name": "pt2", 00:22:25.700 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:25.700 "is_configured": true, 00:22:25.700 "data_offset": 2048, 00:22:25.700 "data_size": 63488 00:22:25.700 }, 00:22:25.700 { 00:22:25.700 "name": "pt3", 00:22:25.700 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:25.700 "is_configured": true, 00:22:25.700 "data_offset": 2048, 00:22:25.700 "data_size": 63488 00:22:25.700 } 00:22:25.700 ] 00:22:25.700 } 00:22:25.700 } 00:22:25.700 }' 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:22:25.700 pt2 00:22:25.700 pt3' 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:25.700 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:25.958 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:25.958 "name": "pt1", 00:22:25.958 "aliases": [ 00:22:25.958 "3c518f43-6e0d-7655-a347-12ff6d3a3960" 00:22:25.958 ], 00:22:25.958 "product_name": "passthru", 00:22:25.958 "block_size": 512, 00:22:25.958 "num_blocks": 65536, 00:22:25.958 "uuid": "3c518f43-6e0d-7655-a347-12ff6d3a3960", 00:22:25.958 "assigned_rate_limits": { 00:22:25.958 "rw_ios_per_sec": 0, 00:22:25.958 "rw_mbytes_per_sec": 0, 00:22:25.958 "r_mbytes_per_sec": 0, 00:22:25.958 "w_mbytes_per_sec": 0 00:22:25.958 }, 00:22:25.958 "claimed": true, 00:22:25.958 "claim_type": "exclusive_write", 00:22:25.958 "zoned": false, 00:22:25.958 "supported_io_types": { 00:22:25.958 "read": true, 00:22:25.958 "write": true, 00:22:25.958 "unmap": true, 00:22:25.958 "write_zeroes": true, 00:22:25.958 "flush": true, 00:22:25.958 "reset": true, 00:22:25.958 
"compare": false, 00:22:25.958 "compare_and_write": false, 00:22:25.958 "abort": true, 00:22:25.958 "nvme_admin": false, 00:22:25.958 "nvme_io": false 00:22:25.958 }, 00:22:25.959 "memory_domains": [ 00:22:25.959 { 00:22:25.959 "dma_device_id": "system", 00:22:25.959 "dma_device_type": 1 00:22:25.959 }, 00:22:25.959 { 00:22:25.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.959 "dma_device_type": 2 00:22:25.959 } 00:22:25.959 ], 00:22:25.959 "driver_specific": { 00:22:25.959 "passthru": { 00:22:25.959 "name": "pt1", 00:22:25.959 "base_bdev_name": "malloc1" 00:22:25.959 } 00:22:25.959 } 00:22:25.959 }' 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:25.959 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:26.217 "name": "pt2", 00:22:26.217 "aliases": [ 00:22:26.217 "59887f9e-bc73-eb50-abca-1d0eaae3c586" 00:22:26.217 ], 00:22:26.217 "product_name": "passthru", 00:22:26.217 "block_size": 512, 00:22:26.217 "num_blocks": 65536, 00:22:26.217 "uuid": "59887f9e-bc73-eb50-abca-1d0eaae3c586", 00:22:26.217 "assigned_rate_limits": { 00:22:26.217 "rw_ios_per_sec": 0, 00:22:26.217 "rw_mbytes_per_sec": 0, 00:22:26.217 "r_mbytes_per_sec": 0, 00:22:26.217 "w_mbytes_per_sec": 0 00:22:26.217 }, 00:22:26.217 "claimed": true, 00:22:26.217 "claim_type": "exclusive_write", 00:22:26.217 "zoned": false, 00:22:26.217 "supported_io_types": { 00:22:26.217 "read": true, 00:22:26.217 "write": true, 00:22:26.217 "unmap": true, 00:22:26.217 "write_zeroes": true, 00:22:26.217 "flush": true, 00:22:26.217 "reset": true, 00:22:26.217 "compare": false, 00:22:26.217 "compare_and_write": false, 00:22:26.217 "abort": true, 00:22:26.217 "nvme_admin": false, 00:22:26.217 "nvme_io": false 00:22:26.217 }, 00:22:26.217 "memory_domains": [ 00:22:26.217 { 00:22:26.217 "dma_device_id": "system", 00:22:26.217 "dma_device_type": 1 00:22:26.217 }, 00:22:26.217 { 00:22:26.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.217 "dma_device_type": 2 00:22:26.217 } 00:22:26.217 ], 
00:22:26.217 "driver_specific": { 00:22:26.217 "passthru": { 00:22:26.217 "name": "pt2", 00:22:26.217 "base_bdev_name": "malloc2" 00:22:26.217 } 00:22:26.217 } 00:22:26.217 }' 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:26.217 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:26.475 "name": "pt3", 00:22:26.475 "aliases": [ 00:22:26.475 "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469" 00:22:26.475 ], 00:22:26.475 "product_name": "passthru", 00:22:26.475 "block_size": 512, 00:22:26.475 "num_blocks": 65536, 00:22:26.475 "uuid": "01d9ebb7-8771-f95d-a0f4-b09e8dd8e469", 00:22:26.475 "assigned_rate_limits": { 00:22:26.475 "rw_ios_per_sec": 0, 00:22:26.475 "rw_mbytes_per_sec": 0, 00:22:26.475 "r_mbytes_per_sec": 0, 00:22:26.475 "w_mbytes_per_sec": 0 00:22:26.475 }, 00:22:26.475 "claimed": true, 00:22:26.475 "claim_type": "exclusive_write", 00:22:26.475 "zoned": false, 00:22:26.475 "supported_io_types": { 00:22:26.475 "read": true, 00:22:26.475 "write": true, 00:22:26.475 "unmap": true, 00:22:26.475 "write_zeroes": true, 00:22:26.475 "flush": true, 00:22:26.475 "reset": true, 00:22:26.475 "compare": false, 00:22:26.475 "compare_and_write": false, 00:22:26.475 "abort": true, 00:22:26.475 "nvme_admin": false, 00:22:26.475 "nvme_io": false 00:22:26.475 }, 00:22:26.475 "memory_domains": [ 00:22:26.475 { 00:22:26.475 "dma_device_id": "system", 00:22:26.475 "dma_device_type": 1 00:22:26.475 }, 00:22:26.475 { 00:22:26.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.475 "dma_device_type": 2 00:22:26.475 } 00:22:26.475 ], 00:22:26.475 "driver_specific": { 00:22:26.475 "passthru": { 00:22:26.475 "name": "pt3", 00:22:26.475 "base_bdev_name": "malloc3" 00:22:26.475 } 00:22:26.475 } 00:22:26.475 }' 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:26.475 07:35:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 
-- # [[ 512 == 512 ]] 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:26.475 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:26.733 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:26.991 [2024-05-16 07:35:20.306916] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d42545d9-1356-11ef-8e8f-9dd684e56d79 '!=' d42545d9-1356-11ef-8e8f-9dd684e56d79 ']' 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 55304 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 55304 ']' 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 55304 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 55304 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55304' 00:22:26.991 killing process with pid 55304 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 55304 00:22:26.991 [2024-05-16 07:35:20.341921] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.991 [2024-05-16 07:35:20.341949] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.991 [2024-05-16 07:35:20.341970] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.991 [2024-05-16 07:35:20.341974] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b47f780 name raid_bdev1, state offline 
00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 55304 00:22:26.991 [2024-05-16 07:35:20.356392] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:26.991 00:22:26.991 real 0m11.289s 00:22:26.991 user 0m20.006s 00:22:26.991 sys 0m1.824s 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.991 ************************************ 00:22:26.991 END TEST raid_superblock_test 00:22:26.991 07:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.991 ************************************ 00:22:27.249 07:35:20 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:22:27.249 07:35:20 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:22:27.249 07:35:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:27.249 07:35:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:27.249 07:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.249 ************************************ 00:22:27.249 START TEST raid_state_function_test 00:22:27.249 ************************************ 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 
00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=55653 00:22:27.249 Process raid pid: 55653 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 55653' 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 55653 /var/tmp/spdk-raid.sock 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 55653 ']' 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:27.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:27.249 07:35:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.249 [2024-05-16 07:35:20.585964] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:27.249 [2024-05-16 07:35:20.586265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:27.579 EAL: TSC is not safe to use in SMP mode 00:22:27.579 EAL: TSC is not invariant 00:22:27.579 [2024-05-16 07:35:21.068240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.837 [2024-05-16 07:35:21.151371] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
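The block above is the raid_state_function_test prologue: a dedicated bdev_svc application is started with its own RPC socket (/var/tmp/spdk-raid.sock) and bdev_raid debug logging enabled, and the script waits for that socket before issuing any raid RPCs. A rough sketch of that startup is below, with a simple rpc_get_methods poll standing in for the suite's waitforlisten helper (the real helper in autotest_common.sh does more):

    # Rough sketch of the bdev_svc startup traced above; polling rpc_get_methods
    # is a simplified stand-in for the test suite's waitforlisten helper.
    ROOT=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$ROOT/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    raid_pid=$!
    until "$ROOT/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1   # bail out if the app died
        sleep 0.5
    done
    echo "bdev_svc (pid $raid_pid) is listening on $SOCK"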
00:22:27.837 [2024-05-16 07:35:21.153639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.837 [2024-05-16 07:35:21.154422] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.837 [2024-05-16 07:35:21.154435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.094 07:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:28.094 07:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:22:28.094 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:28.353 [2024-05-16 07:35:21.708963] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.353 [2024-05-16 07:35:21.709030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.353 [2024-05-16 07:35:21.709034] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.353 [2024-05-16 07:35:21.709042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.353 [2024-05-16 07:35:21.709045] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:28.353 [2024-05-16 07:35:21.709052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.353 07:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.610 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.610 "name": "Existed_Raid", 00:22:28.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.610 "strip_size_kb": 0, 00:22:28.610 "state": "configuring", 00:22:28.610 "raid_level": "raid1", 00:22:28.610 "superblock": false, 00:22:28.610 "num_base_bdevs": 3, 00:22:28.610 "num_base_bdevs_discovered": 0, 00:22:28.610 "num_base_bdevs_operational": 3, 00:22:28.610 "base_bdevs_list": [ 
00:22:28.610 { 00:22:28.610 "name": "BaseBdev1", 00:22:28.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.610 "is_configured": false, 00:22:28.610 "data_offset": 0, 00:22:28.610 "data_size": 0 00:22:28.610 }, 00:22:28.610 { 00:22:28.610 "name": "BaseBdev2", 00:22:28.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.610 "is_configured": false, 00:22:28.610 "data_offset": 0, 00:22:28.610 "data_size": 0 00:22:28.610 }, 00:22:28.610 { 00:22:28.610 "name": "BaseBdev3", 00:22:28.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.610 "is_configured": false, 00:22:28.610 "data_offset": 0, 00:22:28.610 "data_size": 0 00:22:28.610 } 00:22:28.610 ] 00:22:28.610 }' 00:22:28.610 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.610 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.867 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:29.125 [2024-05-16 07:35:22.497017] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:29.125 [2024-05-16 07:35:22.497040] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd28500 name Existed_Raid, state configuring 00:22:29.125 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:29.383 [2024-05-16 07:35:22.765047] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:29.383 [2024-05-16 07:35:22.765094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:29.383 [2024-05-16 07:35:22.765098] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:29.383 [2024-05-16 07:35:22.765105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:29.383 [2024-05-16 07:35:22.765108] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:29.383 [2024-05-16 07:35:22.765114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:29.383 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:29.640 [2024-05-16 07:35:22.961921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.640 BaseBdev1 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:29.640 07:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:29.897 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:30.155 [ 00:22:30.155 { 00:22:30.155 "name": "BaseBdev1", 00:22:30.155 "aliases": [ 00:22:30.155 "dabb7f31-1356-11ef-8e8f-9dd684e56d79" 00:22:30.155 ], 00:22:30.155 "product_name": "Malloc disk", 00:22:30.155 "block_size": 512, 00:22:30.155 "num_blocks": 65536, 00:22:30.155 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:30.155 "assigned_rate_limits": { 00:22:30.155 "rw_ios_per_sec": 0, 00:22:30.155 "rw_mbytes_per_sec": 0, 00:22:30.155 "r_mbytes_per_sec": 0, 00:22:30.155 "w_mbytes_per_sec": 0 00:22:30.155 }, 00:22:30.155 "claimed": true, 00:22:30.155 "claim_type": "exclusive_write", 00:22:30.155 "zoned": false, 00:22:30.155 "supported_io_types": { 00:22:30.155 "read": true, 00:22:30.155 "write": true, 00:22:30.155 "unmap": true, 00:22:30.155 "write_zeroes": true, 00:22:30.155 "flush": true, 00:22:30.155 "reset": true, 00:22:30.155 "compare": false, 00:22:30.155 "compare_and_write": false, 00:22:30.155 "abort": true, 00:22:30.155 "nvme_admin": false, 00:22:30.155 "nvme_io": false 00:22:30.155 }, 00:22:30.155 "memory_domains": [ 00:22:30.155 { 00:22:30.155 "dma_device_id": "system", 00:22:30.155 "dma_device_type": 1 00:22:30.155 }, 00:22:30.155 { 00:22:30.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.155 "dma_device_type": 2 00:22:30.155 } 00:22:30.155 ], 00:22:30.155 "driver_specific": {} 00:22:30.155 } 00:22:30.155 ] 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.155 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.412 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.412 "name": "Existed_Raid", 00:22:30.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.412 "strip_size_kb": 0, 00:22:30.412 "state": "configuring", 00:22:30.412 "raid_level": "raid1", 00:22:30.412 "superblock": false, 00:22:30.412 
"num_base_bdevs": 3, 00:22:30.412 "num_base_bdevs_discovered": 1, 00:22:30.412 "num_base_bdevs_operational": 3, 00:22:30.412 "base_bdevs_list": [ 00:22:30.412 { 00:22:30.412 "name": "BaseBdev1", 00:22:30.412 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:30.412 "is_configured": true, 00:22:30.412 "data_offset": 0, 00:22:30.412 "data_size": 65536 00:22:30.412 }, 00:22:30.412 { 00:22:30.412 "name": "BaseBdev2", 00:22:30.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.412 "is_configured": false, 00:22:30.412 "data_offset": 0, 00:22:30.412 "data_size": 0 00:22:30.412 }, 00:22:30.412 { 00:22:30.412 "name": "BaseBdev3", 00:22:30.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.412 "is_configured": false, 00:22:30.412 "data_offset": 0, 00:22:30.412 "data_size": 0 00:22:30.412 } 00:22:30.412 ] 00:22:30.412 }' 00:22:30.412 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.412 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.670 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:30.926 [2024-05-16 07:35:24.329153] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:30.926 [2024-05-16 07:35:24.329197] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd28500 name Existed_Raid, state configuring 00:22:30.926 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:31.181 [2024-05-16 07:35:24.541159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:31.181 [2024-05-16 07:35:24.541822] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:31.181 [2024-05-16 07:35:24.541861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:31.181 [2024-05-16 07:35:24.541866] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:31.181 [2024-05-16 07:35:24.541873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.181 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.437 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.437 "name": "Existed_Raid", 00:22:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.437 "strip_size_kb": 0, 00:22:31.437 "state": "configuring", 00:22:31.437 "raid_level": "raid1", 00:22:31.437 "superblock": false, 00:22:31.437 "num_base_bdevs": 3, 00:22:31.437 "num_base_bdevs_discovered": 1, 00:22:31.437 "num_base_bdevs_operational": 3, 00:22:31.437 "base_bdevs_list": [ 00:22:31.437 { 00:22:31.437 "name": "BaseBdev1", 00:22:31.437 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:31.437 "is_configured": true, 00:22:31.437 "data_offset": 0, 00:22:31.437 "data_size": 65536 00:22:31.437 }, 00:22:31.437 { 00:22:31.437 "name": "BaseBdev2", 00:22:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.437 "is_configured": false, 00:22:31.437 "data_offset": 0, 00:22:31.437 "data_size": 0 00:22:31.437 }, 00:22:31.437 { 00:22:31.437 "name": "BaseBdev3", 00:22:31.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.437 "is_configured": false, 00:22:31.437 "data_offset": 0, 00:22:31.437 "data_size": 0 00:22:31.437 } 00:22:31.437 ] 00:22:31.437 }' 00:22:31.437 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.437 07:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.694 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:31.951 [2024-05-16 07:35:25.257309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.951 BaseBdev2 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:31.951 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.207 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:32.465 [ 00:22:32.465 { 00:22:32.465 "name": "BaseBdev2", 00:22:32.465 "aliases": [ 00:22:32.465 "dc19db90-1356-11ef-8e8f-9dd684e56d79" 00:22:32.465 ], 00:22:32.465 "product_name": "Malloc 
disk", 00:22:32.465 "block_size": 512, 00:22:32.465 "num_blocks": 65536, 00:22:32.465 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:32.465 "assigned_rate_limits": { 00:22:32.465 "rw_ios_per_sec": 0, 00:22:32.465 "rw_mbytes_per_sec": 0, 00:22:32.465 "r_mbytes_per_sec": 0, 00:22:32.465 "w_mbytes_per_sec": 0 00:22:32.465 }, 00:22:32.465 "claimed": true, 00:22:32.465 "claim_type": "exclusive_write", 00:22:32.465 "zoned": false, 00:22:32.465 "supported_io_types": { 00:22:32.465 "read": true, 00:22:32.465 "write": true, 00:22:32.465 "unmap": true, 00:22:32.465 "write_zeroes": true, 00:22:32.465 "flush": true, 00:22:32.465 "reset": true, 00:22:32.465 "compare": false, 00:22:32.465 "compare_and_write": false, 00:22:32.465 "abort": true, 00:22:32.465 "nvme_admin": false, 00:22:32.465 "nvme_io": false 00:22:32.465 }, 00:22:32.465 "memory_domains": [ 00:22:32.465 { 00:22:32.465 "dma_device_id": "system", 00:22:32.465 "dma_device_type": 1 00:22:32.465 }, 00:22:32.465 { 00:22:32.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.465 "dma_device_type": 2 00:22:32.465 } 00:22:32.465 ], 00:22:32.465 "driver_specific": {} 00:22:32.465 } 00:22:32.465 ] 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.465 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.465 "name": "Existed_Raid", 00:22:32.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.465 "strip_size_kb": 0, 00:22:32.465 "state": "configuring", 00:22:32.465 "raid_level": "raid1", 00:22:32.465 "superblock": false, 00:22:32.465 "num_base_bdevs": 3, 00:22:32.465 "num_base_bdevs_discovered": 2, 00:22:32.465 "num_base_bdevs_operational": 3, 00:22:32.465 "base_bdevs_list": [ 00:22:32.465 { 00:22:32.465 "name": "BaseBdev1", 00:22:32.465 "uuid": 
"dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:32.465 "is_configured": true, 00:22:32.465 "data_offset": 0, 00:22:32.465 "data_size": 65536 00:22:32.465 }, 00:22:32.465 { 00:22:32.465 "name": "BaseBdev2", 00:22:32.465 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:32.465 "is_configured": true, 00:22:32.466 "data_offset": 0, 00:22:32.466 "data_size": 65536 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "name": "BaseBdev3", 00:22:32.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.466 "is_configured": false, 00:22:32.466 "data_offset": 0, 00:22:32.466 "data_size": 0 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 }' 00:22:32.466 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.466 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:33.031 [2024-05-16 07:35:26.509390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:33.031 [2024-05-16 07:35:26.509416] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd28a00 00:22:33.031 [2024-05-16 07:35:26.509419] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:33.031 [2024-05-16 07:35:26.509437] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd8bec0 00:22:33.031 [2024-05-16 07:35:26.509516] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd28a00 00:22:33.031 [2024-05-16 07:35:26.509519] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bd28a00 00:22:33.031 [2024-05-16 07:35:26.509543] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.031 BaseBdev3 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:33.031 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:33.288 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:33.546 [ 00:22:33.546 { 00:22:33.546 "name": "BaseBdev3", 00:22:33.546 "aliases": [ 00:22:33.546 "dcd8e974-1356-11ef-8e8f-9dd684e56d79" 00:22:33.546 ], 00:22:33.546 "product_name": "Malloc disk", 00:22:33.546 "block_size": 512, 00:22:33.546 "num_blocks": 65536, 00:22:33.546 "uuid": "dcd8e974-1356-11ef-8e8f-9dd684e56d79", 00:22:33.546 "assigned_rate_limits": { 00:22:33.546 "rw_ios_per_sec": 0, 00:22:33.546 "rw_mbytes_per_sec": 0, 00:22:33.546 "r_mbytes_per_sec": 0, 00:22:33.546 "w_mbytes_per_sec": 0 00:22:33.546 }, 00:22:33.546 
"claimed": true, 00:22:33.546 "claim_type": "exclusive_write", 00:22:33.546 "zoned": false, 00:22:33.546 "supported_io_types": { 00:22:33.546 "read": true, 00:22:33.546 "write": true, 00:22:33.546 "unmap": true, 00:22:33.546 "write_zeroes": true, 00:22:33.546 "flush": true, 00:22:33.546 "reset": true, 00:22:33.546 "compare": false, 00:22:33.546 "compare_and_write": false, 00:22:33.546 "abort": true, 00:22:33.546 "nvme_admin": false, 00:22:33.546 "nvme_io": false 00:22:33.546 }, 00:22:33.546 "memory_domains": [ 00:22:33.546 { 00:22:33.546 "dma_device_id": "system", 00:22:33.546 "dma_device_type": 1 00:22:33.546 }, 00:22:33.546 { 00:22:33.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.546 "dma_device_type": 2 00:22:33.546 } 00:22:33.546 ], 00:22:33.546 "driver_specific": {} 00:22:33.546 } 00:22:33.546 ] 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.546 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.802 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.802 "name": "Existed_Raid", 00:22:33.802 "uuid": "dcd8ee5f-1356-11ef-8e8f-9dd684e56d79", 00:22:33.802 "strip_size_kb": 0, 00:22:33.802 "state": "online", 00:22:33.802 "raid_level": "raid1", 00:22:33.802 "superblock": false, 00:22:33.802 "num_base_bdevs": 3, 00:22:33.802 "num_base_bdevs_discovered": 3, 00:22:33.802 "num_base_bdevs_operational": 3, 00:22:33.802 "base_bdevs_list": [ 00:22:33.802 { 00:22:33.802 "name": "BaseBdev1", 00:22:33.802 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:33.802 "is_configured": true, 00:22:33.802 "data_offset": 0, 00:22:33.802 "data_size": 65536 00:22:33.802 }, 00:22:33.802 { 00:22:33.802 "name": "BaseBdev2", 00:22:33.802 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:33.802 "is_configured": true, 00:22:33.802 "data_offset": 0, 00:22:33.802 "data_size": 65536 00:22:33.802 }, 
00:22:33.802 { 00:22:33.802 "name": "BaseBdev3", 00:22:33.802 "uuid": "dcd8e974-1356-11ef-8e8f-9dd684e56d79", 00:22:33.802 "is_configured": true, 00:22:33.802 "data_offset": 0, 00:22:33.802 "data_size": 65536 00:22:33.802 } 00:22:33.802 ] 00:22:33.802 }' 00:22:33.802 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.802 07:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.059 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:34.059 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:34.059 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:34.060 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:34.060 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:34.060 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:22:34.060 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:34.060 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:34.317 [2024-05-16 07:35:27.725395] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.317 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:34.317 "name": "Existed_Raid", 00:22:34.317 "aliases": [ 00:22:34.317 "dcd8ee5f-1356-11ef-8e8f-9dd684e56d79" 00:22:34.317 ], 00:22:34.317 "product_name": "Raid Volume", 00:22:34.317 "block_size": 512, 00:22:34.317 "num_blocks": 65536, 00:22:34.317 "uuid": "dcd8ee5f-1356-11ef-8e8f-9dd684e56d79", 00:22:34.317 "assigned_rate_limits": { 00:22:34.317 "rw_ios_per_sec": 0, 00:22:34.317 "rw_mbytes_per_sec": 0, 00:22:34.317 "r_mbytes_per_sec": 0, 00:22:34.317 "w_mbytes_per_sec": 0 00:22:34.317 }, 00:22:34.317 "claimed": false, 00:22:34.317 "zoned": false, 00:22:34.317 "supported_io_types": { 00:22:34.317 "read": true, 00:22:34.317 "write": true, 00:22:34.317 "unmap": false, 00:22:34.317 "write_zeroes": true, 00:22:34.317 "flush": false, 00:22:34.317 "reset": true, 00:22:34.317 "compare": false, 00:22:34.317 "compare_and_write": false, 00:22:34.317 "abort": false, 00:22:34.317 "nvme_admin": false, 00:22:34.317 "nvme_io": false 00:22:34.317 }, 00:22:34.317 "memory_domains": [ 00:22:34.318 { 00:22:34.318 "dma_device_id": "system", 00:22:34.318 "dma_device_type": 1 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.318 "dma_device_type": 2 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "dma_device_id": "system", 00:22:34.318 "dma_device_type": 1 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.318 "dma_device_type": 2 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "dma_device_id": "system", 00:22:34.318 "dma_device_type": 1 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.318 "dma_device_type": 2 00:22:34.318 } 00:22:34.318 ], 00:22:34.318 "driver_specific": { 00:22:34.318 "raid": { 00:22:34.318 "uuid": "dcd8ee5f-1356-11ef-8e8f-9dd684e56d79", 00:22:34.318 "strip_size_kb": 0, 00:22:34.318 "state": "online", 00:22:34.318 "raid_level": "raid1", 00:22:34.318 
"superblock": false, 00:22:34.318 "num_base_bdevs": 3, 00:22:34.318 "num_base_bdevs_discovered": 3, 00:22:34.318 "num_base_bdevs_operational": 3, 00:22:34.318 "base_bdevs_list": [ 00:22:34.318 { 00:22:34.318 "name": "BaseBdev1", 00:22:34.318 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:34.318 "is_configured": true, 00:22:34.318 "data_offset": 0, 00:22:34.318 "data_size": 65536 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "name": "BaseBdev2", 00:22:34.318 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:34.318 "is_configured": true, 00:22:34.318 "data_offset": 0, 00:22:34.318 "data_size": 65536 00:22:34.318 }, 00:22:34.318 { 00:22:34.318 "name": "BaseBdev3", 00:22:34.318 "uuid": "dcd8e974-1356-11ef-8e8f-9dd684e56d79", 00:22:34.318 "is_configured": true, 00:22:34.318 "data_offset": 0, 00:22:34.318 "data_size": 65536 00:22:34.318 } 00:22:34.318 ] 00:22:34.318 } 00:22:34.318 } 00:22:34.318 }' 00:22:34.318 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.318 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:34.318 BaseBdev2 00:22:34.318 BaseBdev3' 00:22:34.318 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:34.318 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:34.318 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:34.576 "name": "BaseBdev1", 00:22:34.576 "aliases": [ 00:22:34.576 "dabb7f31-1356-11ef-8e8f-9dd684e56d79" 00:22:34.576 ], 00:22:34.576 "product_name": "Malloc disk", 00:22:34.576 "block_size": 512, 00:22:34.576 "num_blocks": 65536, 00:22:34.576 "uuid": "dabb7f31-1356-11ef-8e8f-9dd684e56d79", 00:22:34.576 "assigned_rate_limits": { 00:22:34.576 "rw_ios_per_sec": 0, 00:22:34.576 "rw_mbytes_per_sec": 0, 00:22:34.576 "r_mbytes_per_sec": 0, 00:22:34.576 "w_mbytes_per_sec": 0 00:22:34.576 }, 00:22:34.576 "claimed": true, 00:22:34.576 "claim_type": "exclusive_write", 00:22:34.576 "zoned": false, 00:22:34.576 "supported_io_types": { 00:22:34.576 "read": true, 00:22:34.576 "write": true, 00:22:34.576 "unmap": true, 00:22:34.576 "write_zeroes": true, 00:22:34.576 "flush": true, 00:22:34.576 "reset": true, 00:22:34.576 "compare": false, 00:22:34.576 "compare_and_write": false, 00:22:34.576 "abort": true, 00:22:34.576 "nvme_admin": false, 00:22:34.576 "nvme_io": false 00:22:34.576 }, 00:22:34.576 "memory_domains": [ 00:22:34.576 { 00:22:34.576 "dma_device_id": "system", 00:22:34.576 "dma_device_type": 1 00:22:34.576 }, 00:22:34.576 { 00:22:34.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.576 "dma_device_type": 2 00:22:34.576 } 00:22:34.576 ], 00:22:34.576 "driver_specific": {} 00:22:34.576 }' 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:34.576 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:34.834 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:34.834 "name": "BaseBdev2", 00:22:34.834 "aliases": [ 00:22:34.834 "dc19db90-1356-11ef-8e8f-9dd684e56d79" 00:22:34.834 ], 00:22:34.834 "product_name": "Malloc disk", 00:22:34.834 "block_size": 512, 00:22:34.834 "num_blocks": 65536, 00:22:34.834 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:34.834 "assigned_rate_limits": { 00:22:34.834 "rw_ios_per_sec": 0, 00:22:34.834 "rw_mbytes_per_sec": 0, 00:22:34.834 "r_mbytes_per_sec": 0, 00:22:34.834 "w_mbytes_per_sec": 0 00:22:34.834 }, 00:22:34.834 "claimed": true, 00:22:34.835 "claim_type": "exclusive_write", 00:22:34.835 "zoned": false, 00:22:34.835 "supported_io_types": { 00:22:34.835 "read": true, 00:22:34.835 "write": true, 00:22:34.835 "unmap": true, 00:22:34.835 "write_zeroes": true, 00:22:34.835 "flush": true, 00:22:34.835 "reset": true, 00:22:34.835 "compare": false, 00:22:34.835 "compare_and_write": false, 00:22:34.835 "abort": true, 00:22:34.835 "nvme_admin": false, 00:22:34.835 "nvme_io": false 00:22:34.835 }, 00:22:34.835 "memory_domains": [ 00:22:34.835 { 00:22:34.835 "dma_device_id": "system", 00:22:34.835 "dma_device_type": 1 00:22:34.835 }, 00:22:34.835 { 00:22:34.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.835 "dma_device_type": 2 00:22:34.835 } 00:22:34.835 ], 00:22:34.835 "driver_specific": {} 00:22:34.835 }' 00:22:34.835 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:35.092 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.093 07:35:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:35.093 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:35.351 "name": "BaseBdev3", 00:22:35.351 "aliases": [ 00:22:35.351 "dcd8e974-1356-11ef-8e8f-9dd684e56d79" 00:22:35.351 ], 00:22:35.351 "product_name": "Malloc disk", 00:22:35.351 "block_size": 512, 00:22:35.351 "num_blocks": 65536, 00:22:35.351 "uuid": "dcd8e974-1356-11ef-8e8f-9dd684e56d79", 00:22:35.351 "assigned_rate_limits": { 00:22:35.351 "rw_ios_per_sec": 0, 00:22:35.351 "rw_mbytes_per_sec": 0, 00:22:35.351 "r_mbytes_per_sec": 0, 00:22:35.351 "w_mbytes_per_sec": 0 00:22:35.351 }, 00:22:35.351 "claimed": true, 00:22:35.351 "claim_type": "exclusive_write", 00:22:35.351 "zoned": false, 00:22:35.351 "supported_io_types": { 00:22:35.351 "read": true, 00:22:35.351 "write": true, 00:22:35.351 "unmap": true, 00:22:35.351 "write_zeroes": true, 00:22:35.351 "flush": true, 00:22:35.351 "reset": true, 00:22:35.351 "compare": false, 00:22:35.351 "compare_and_write": false, 00:22:35.351 "abort": true, 00:22:35.351 "nvme_admin": false, 00:22:35.351 "nvme_io": false 00:22:35.351 }, 00:22:35.351 "memory_domains": [ 00:22:35.351 { 00:22:35.351 "dma_device_id": "system", 00:22:35.351 "dma_device_type": 1 00:22:35.351 }, 00:22:35.351 { 00:22:35.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.351 "dma_device_type": 2 00:22:35.351 } 00:22:35.351 ], 00:22:35.351 "driver_specific": {} 00:22:35.351 }' 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:35.351 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:35.609 
[2024-05-16 07:35:29.033459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.609 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.867 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.867 "name": "Existed_Raid", 00:22:35.867 "uuid": "dcd8ee5f-1356-11ef-8e8f-9dd684e56d79", 00:22:35.867 "strip_size_kb": 0, 00:22:35.867 "state": "online", 00:22:35.867 "raid_level": "raid1", 00:22:35.867 "superblock": false, 00:22:35.867 "num_base_bdevs": 3, 00:22:35.867 "num_base_bdevs_discovered": 2, 00:22:35.867 "num_base_bdevs_operational": 2, 00:22:35.867 "base_bdevs_list": [ 00:22:35.867 { 00:22:35.867 "name": null, 00:22:35.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.867 "is_configured": false, 00:22:35.867 "data_offset": 0, 00:22:35.867 "data_size": 65536 00:22:35.867 }, 00:22:35.867 { 00:22:35.867 "name": "BaseBdev2", 00:22:35.867 "uuid": "dc19db90-1356-11ef-8e8f-9dd684e56d79", 00:22:35.867 "is_configured": true, 00:22:35.867 "data_offset": 0, 00:22:35.867 "data_size": 65536 00:22:35.867 }, 00:22:35.867 { 00:22:35.867 "name": "BaseBdev3", 00:22:35.867 "uuid": "dcd8e974-1356-11ef-8e8f-9dd684e56d79", 00:22:35.867 "is_configured": true, 00:22:35.867 "data_offset": 0, 00:22:35.867 "data_size": 65536 00:22:35.867 } 00:22:35.867 ] 00:22:35.867 }' 00:22:35.867 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.867 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:36.432 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:36.689 [2024-05-16 07:35:30.154322] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:36.689 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.689 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.689 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.689 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:36.946 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:36.946 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:36.946 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:37.205 [2024-05-16 07:35:30.595050] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:37.205 [2024-05-16 07:35:30.595082] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.205 [2024-05-16 07:35:30.599891] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.205 [2024-05-16 07:35:30.599906] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.205 [2024-05-16 07:35:30.599910] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd28a00 name Existed_Raid, state offline 00:22:37.205 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:37.205 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:37.205 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.205 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:37.463 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:37.720 BaseBdev2 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:37.720 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:37.978 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:37.978 [ 00:22:37.978 { 00:22:37.978 "name": "BaseBdev2", 00:22:37.978 "aliases": [ 00:22:37.978 "df8b9d24-1356-11ef-8e8f-9dd684e56d79" 00:22:37.978 ], 00:22:37.978 "product_name": "Malloc disk", 00:22:37.978 "block_size": 512, 00:22:37.978 "num_blocks": 65536, 00:22:37.978 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:37.978 "assigned_rate_limits": { 00:22:37.978 "rw_ios_per_sec": 0, 00:22:37.978 "rw_mbytes_per_sec": 0, 00:22:37.978 "r_mbytes_per_sec": 0, 00:22:37.978 "w_mbytes_per_sec": 0 00:22:37.978 }, 00:22:37.978 "claimed": false, 00:22:37.978 "zoned": false, 00:22:37.978 "supported_io_types": { 00:22:37.978 "read": true, 00:22:37.978 "write": true, 00:22:37.978 "unmap": true, 00:22:37.978 "write_zeroes": true, 00:22:37.978 "flush": true, 00:22:37.978 "reset": true, 00:22:37.978 "compare": false, 00:22:37.978 "compare_and_write": false, 00:22:37.978 "abort": true, 00:22:37.978 "nvme_admin": false, 00:22:37.978 "nvme_io": false 00:22:37.978 }, 00:22:37.978 "memory_domains": [ 00:22:37.978 { 00:22:37.978 "dma_device_id": "system", 00:22:37.978 "dma_device_type": 1 00:22:37.978 }, 00:22:37.978 { 00:22:37.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.978 "dma_device_type": 2 00:22:37.978 } 00:22:37.978 ], 00:22:37.978 "driver_specific": {} 00:22:37.978 } 00:22:37.978 ] 00:22:37.978 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:37.978 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:22:37.978 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:37.978 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:38.236 BaseBdev3 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:38.236 
07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:38.236 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.495 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:38.770 [ 00:22:38.770 { 00:22:38.770 "name": "BaseBdev3", 00:22:38.770 "aliases": [ 00:22:38.770 "dff97b4c-1356-11ef-8e8f-9dd684e56d79" 00:22:38.770 ], 00:22:38.770 "product_name": "Malloc disk", 00:22:38.770 "block_size": 512, 00:22:38.770 "num_blocks": 65536, 00:22:38.770 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:38.770 "assigned_rate_limits": { 00:22:38.770 "rw_ios_per_sec": 0, 00:22:38.770 "rw_mbytes_per_sec": 0, 00:22:38.770 "r_mbytes_per_sec": 0, 00:22:38.770 "w_mbytes_per_sec": 0 00:22:38.770 }, 00:22:38.770 "claimed": false, 00:22:38.770 "zoned": false, 00:22:38.770 "supported_io_types": { 00:22:38.770 "read": true, 00:22:38.770 "write": true, 00:22:38.770 "unmap": true, 00:22:38.770 "write_zeroes": true, 00:22:38.770 "flush": true, 00:22:38.770 "reset": true, 00:22:38.770 "compare": false, 00:22:38.770 "compare_and_write": false, 00:22:38.770 "abort": true, 00:22:38.770 "nvme_admin": false, 00:22:38.770 "nvme_io": false 00:22:38.770 }, 00:22:38.770 "memory_domains": [ 00:22:38.770 { 00:22:38.770 "dma_device_id": "system", 00:22:38.770 "dma_device_type": 1 00:22:38.770 }, 00:22:38.770 { 00:22:38.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.770 "dma_device_type": 2 00:22:38.770 } 00:22:38.770 ], 00:22:38.770 "driver_specific": {} 00:22:38.770 } 00:22:38.770 ] 00:22:38.770 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:38.770 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:22:38.770 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:22:38.770 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:39.047 [2024-05-16 07:35:32.475946] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:39.047 [2024-05-16 07:35:32.476012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:39.047 [2024-05-16 07:35:32.476019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.047 [2024-05-16 07:35:32.476419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.047 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.305 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.305 "name": "Existed_Raid", 00:22:39.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.305 "strip_size_kb": 0, 00:22:39.305 "state": "configuring", 00:22:39.305 "raid_level": "raid1", 00:22:39.305 "superblock": false, 00:22:39.305 "num_base_bdevs": 3, 00:22:39.306 "num_base_bdevs_discovered": 2, 00:22:39.306 "num_base_bdevs_operational": 3, 00:22:39.306 "base_bdevs_list": [ 00:22:39.306 { 00:22:39.306 "name": "BaseBdev1", 00:22:39.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.306 "is_configured": false, 00:22:39.306 "data_offset": 0, 00:22:39.306 "data_size": 0 00:22:39.306 }, 00:22:39.306 { 00:22:39.306 "name": "BaseBdev2", 00:22:39.306 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:39.306 "is_configured": true, 00:22:39.306 "data_offset": 0, 00:22:39.306 "data_size": 65536 00:22:39.306 }, 00:22:39.306 { 00:22:39.306 "name": "BaseBdev3", 00:22:39.306 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:39.306 "is_configured": true, 00:22:39.306 "data_offset": 0, 00:22:39.306 "data_size": 65536 00:22:39.306 } 00:22:39.306 ] 00:22:39.306 }' 00:22:39.306 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.306 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.563 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:39.821 [2024-05-16 07:35:33.207960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.821 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.079 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.079 "name": "Existed_Raid", 00:22:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.079 "strip_size_kb": 0, 00:22:40.079 "state": "configuring", 00:22:40.079 "raid_level": "raid1", 00:22:40.079 "superblock": false, 00:22:40.079 "num_base_bdevs": 3, 00:22:40.079 "num_base_bdevs_discovered": 1, 00:22:40.079 "num_base_bdevs_operational": 3, 00:22:40.079 "base_bdevs_list": [ 00:22:40.079 { 00:22:40.079 "name": "BaseBdev1", 00:22:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.079 "is_configured": false, 00:22:40.079 "data_offset": 0, 00:22:40.079 "data_size": 0 00:22:40.079 }, 00:22:40.079 { 00:22:40.079 "name": null, 00:22:40.079 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:40.079 "is_configured": false, 00:22:40.079 "data_offset": 0, 00:22:40.079 "data_size": 65536 00:22:40.079 }, 00:22:40.079 { 00:22:40.079 "name": "BaseBdev3", 00:22:40.079 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:40.079 "is_configured": true, 00:22:40.079 "data_offset": 0, 00:22:40.079 "data_size": 65536 00:22:40.079 } 00:22:40.079 ] 00:22:40.079 }' 00:22:40.079 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.079 07:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.337 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.337 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:40.595 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:22:40.595 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:40.852 [2024-05-16 07:35:34.164104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.852 BaseBdev1 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:40.853 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:41.110 [ 00:22:41.110 { 00:22:41.110 "name": "BaseBdev1", 00:22:41.110 "aliases": [ 00:22:41.110 "e168edc6-1356-11ef-8e8f-9dd684e56d79" 00:22:41.110 ], 00:22:41.110 "product_name": "Malloc disk", 00:22:41.110 "block_size": 512, 00:22:41.110 "num_blocks": 65536, 00:22:41.110 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:41.110 "assigned_rate_limits": { 00:22:41.110 "rw_ios_per_sec": 0, 00:22:41.110 "rw_mbytes_per_sec": 0, 00:22:41.110 "r_mbytes_per_sec": 0, 00:22:41.110 "w_mbytes_per_sec": 0 00:22:41.110 }, 00:22:41.110 "claimed": true, 00:22:41.110 "claim_type": "exclusive_write", 00:22:41.110 "zoned": false, 00:22:41.110 "supported_io_types": { 00:22:41.110 "read": true, 00:22:41.110 "write": true, 00:22:41.110 "unmap": true, 00:22:41.110 "write_zeroes": true, 00:22:41.110 "flush": true, 00:22:41.110 "reset": true, 00:22:41.110 "compare": false, 00:22:41.110 "compare_and_write": false, 00:22:41.110 "abort": true, 00:22:41.110 "nvme_admin": false, 00:22:41.110 "nvme_io": false 00:22:41.110 }, 00:22:41.110 "memory_domains": [ 00:22:41.110 { 00:22:41.110 "dma_device_id": "system", 00:22:41.110 "dma_device_type": 1 00:22:41.110 }, 00:22:41.110 { 00:22:41.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.110 "dma_device_type": 2 00:22:41.110 } 00:22:41.110 ], 00:22:41.110 "driver_specific": {} 00:22:41.110 } 00:22:41.110 ] 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.110 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.368 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.368 "name": "Existed_Raid", 00:22:41.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.368 "strip_size_kb": 0, 00:22:41.368 "state": "configuring", 
00:22:41.368 "raid_level": "raid1", 00:22:41.368 "superblock": false, 00:22:41.368 "num_base_bdevs": 3, 00:22:41.368 "num_base_bdevs_discovered": 2, 00:22:41.368 "num_base_bdevs_operational": 3, 00:22:41.368 "base_bdevs_list": [ 00:22:41.368 { 00:22:41.368 "name": "BaseBdev1", 00:22:41.368 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:41.368 "is_configured": true, 00:22:41.368 "data_offset": 0, 00:22:41.368 "data_size": 65536 00:22:41.368 }, 00:22:41.368 { 00:22:41.368 "name": null, 00:22:41.368 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:41.368 "is_configured": false, 00:22:41.368 "data_offset": 0, 00:22:41.368 "data_size": 65536 00:22:41.368 }, 00:22:41.368 { 00:22:41.368 "name": "BaseBdev3", 00:22:41.368 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:41.368 "is_configured": true, 00:22:41.368 "data_offset": 0, 00:22:41.368 "data_size": 65536 00:22:41.368 } 00:22:41.368 ] 00:22:41.368 }' 00:22:41.368 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.368 07:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.626 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.626 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:42.197 [2024-05-16 07:35:35.688072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.197 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.456 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.456 "name": "Existed_Raid", 00:22:42.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.456 
"strip_size_kb": 0, 00:22:42.456 "state": "configuring", 00:22:42.456 "raid_level": "raid1", 00:22:42.456 "superblock": false, 00:22:42.456 "num_base_bdevs": 3, 00:22:42.456 "num_base_bdevs_discovered": 1, 00:22:42.456 "num_base_bdevs_operational": 3, 00:22:42.456 "base_bdevs_list": [ 00:22:42.456 { 00:22:42.456 "name": "BaseBdev1", 00:22:42.456 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:42.456 "is_configured": true, 00:22:42.456 "data_offset": 0, 00:22:42.456 "data_size": 65536 00:22:42.456 }, 00:22:42.456 { 00:22:42.456 "name": null, 00:22:42.456 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:42.456 "is_configured": false, 00:22:42.456 "data_offset": 0, 00:22:42.456 "data_size": 65536 00:22:42.456 }, 00:22:42.456 { 00:22:42.456 "name": null, 00:22:42.456 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:42.456 "is_configured": false, 00:22:42.456 "data_offset": 0, 00:22:42.456 "data_size": 65536 00:22:42.456 } 00:22:42.456 ] 00:22:42.456 }' 00:22:42.456 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.456 07:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.714 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.714 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:43.279 [2024-05-16 07:35:36.720118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.279 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.538 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.538 "name": 
"Existed_Raid", 00:22:43.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.538 "strip_size_kb": 0, 00:22:43.538 "state": "configuring", 00:22:43.538 "raid_level": "raid1", 00:22:43.538 "superblock": false, 00:22:43.538 "num_base_bdevs": 3, 00:22:43.538 "num_base_bdevs_discovered": 2, 00:22:43.538 "num_base_bdevs_operational": 3, 00:22:43.538 "base_bdevs_list": [ 00:22:43.538 { 00:22:43.538 "name": "BaseBdev1", 00:22:43.538 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:43.538 "is_configured": true, 00:22:43.538 "data_offset": 0, 00:22:43.538 "data_size": 65536 00:22:43.538 }, 00:22:43.538 { 00:22:43.538 "name": null, 00:22:43.538 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:43.538 "is_configured": false, 00:22:43.538 "data_offset": 0, 00:22:43.538 "data_size": 65536 00:22:43.538 }, 00:22:43.538 { 00:22:43.538 "name": "BaseBdev3", 00:22:43.538 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:43.538 "is_configured": true, 00:22:43.538 "data_offset": 0, 00:22:43.538 "data_size": 65536 00:22:43.538 } 00:22:43.538 ] 00:22:43.538 }' 00:22:43.538 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.538 07:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.796 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.796 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:44.054 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:22:44.054 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:44.311 [2024-05-16 07:35:37.760169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.311 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.568 07:35:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.569 "name": "Existed_Raid", 00:22:44.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.569 "strip_size_kb": 0, 00:22:44.569 "state": "configuring", 00:22:44.569 "raid_level": "raid1", 00:22:44.569 "superblock": false, 00:22:44.569 "num_base_bdevs": 3, 00:22:44.569 "num_base_bdevs_discovered": 1, 00:22:44.569 "num_base_bdevs_operational": 3, 00:22:44.569 "base_bdevs_list": [ 00:22:44.569 { 00:22:44.569 "name": null, 00:22:44.569 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:44.569 "is_configured": false, 00:22:44.569 "data_offset": 0, 00:22:44.569 "data_size": 65536 00:22:44.569 }, 00:22:44.569 { 00:22:44.569 "name": null, 00:22:44.569 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:44.569 "is_configured": false, 00:22:44.569 "data_offset": 0, 00:22:44.569 "data_size": 65536 00:22:44.569 }, 00:22:44.569 { 00:22:44.569 "name": "BaseBdev3", 00:22:44.569 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:44.569 "is_configured": true, 00:22:44.569 "data_offset": 0, 00:22:44.569 "data_size": 65536 00:22:44.569 } 00:22:44.569 ] 00:22:44.569 }' 00:22:44.569 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.569 07:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.826 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.826 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:45.084 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:22:45.084 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:45.349 [2024-05-16 07:35:38.864917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.349 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:45.618 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.618 "name": "Existed_Raid", 00:22:45.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.618 "strip_size_kb": 0, 00:22:45.618 "state": "configuring", 00:22:45.618 "raid_level": "raid1", 00:22:45.618 "superblock": false, 00:22:45.618 "num_base_bdevs": 3, 00:22:45.618 "num_base_bdevs_discovered": 2, 00:22:45.618 "num_base_bdevs_operational": 3, 00:22:45.618 "base_bdevs_list": [ 00:22:45.618 { 00:22:45.618 "name": null, 00:22:45.618 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:45.618 "is_configured": false, 00:22:45.618 "data_offset": 0, 00:22:45.618 "data_size": 65536 00:22:45.618 }, 00:22:45.618 { 00:22:45.618 "name": "BaseBdev2", 00:22:45.618 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:45.618 "is_configured": true, 00:22:45.618 "data_offset": 0, 00:22:45.618 "data_size": 65536 00:22:45.618 }, 00:22:45.618 { 00:22:45.618 "name": "BaseBdev3", 00:22:45.618 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:45.618 "is_configured": true, 00:22:45.618 "data_offset": 0, 00:22:45.618 "data_size": 65536 00:22:45.618 } 00:22:45.618 ] 00:22:45.618 }' 00:22:45.618 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.618 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.875 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.876 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:46.134 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:22:46.134 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:46.134 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.392 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e168edc6-1356-11ef-8e8f-9dd684e56d79 00:22:46.651 [2024-05-16 07:35:40.073118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:46.651 [2024-05-16 07:35:40.073141] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bd28f00 00:22:46.651 [2024-05-16 07:35:40.073145] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:46.651 [2024-05-16 07:35:40.073164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd8be20 00:22:46.651 [2024-05-16 07:35:40.073216] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bd28f00 00:22:46.651 [2024-05-16 07:35:40.073220] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bd28f00 00:22:46.651 [2024-05-16 07:35:40.073246] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.651 NewBaseBdev 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local 
bdev_name=NewBaseBdev 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:46.651 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.909 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:47.168 [ 00:22:47.168 { 00:22:47.168 "name": "NewBaseBdev", 00:22:47.168 "aliases": [ 00:22:47.168 "e168edc6-1356-11ef-8e8f-9dd684e56d79" 00:22:47.168 ], 00:22:47.168 "product_name": "Malloc disk", 00:22:47.168 "block_size": 512, 00:22:47.168 "num_blocks": 65536, 00:22:47.168 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:47.168 "assigned_rate_limits": { 00:22:47.168 "rw_ios_per_sec": 0, 00:22:47.168 "rw_mbytes_per_sec": 0, 00:22:47.168 "r_mbytes_per_sec": 0, 00:22:47.168 "w_mbytes_per_sec": 0 00:22:47.168 }, 00:22:47.168 "claimed": true, 00:22:47.168 "claim_type": "exclusive_write", 00:22:47.168 "zoned": false, 00:22:47.168 "supported_io_types": { 00:22:47.168 "read": true, 00:22:47.168 "write": true, 00:22:47.168 "unmap": true, 00:22:47.168 "write_zeroes": true, 00:22:47.168 "flush": true, 00:22:47.168 "reset": true, 00:22:47.168 "compare": false, 00:22:47.168 "compare_and_write": false, 00:22:47.168 "abort": true, 00:22:47.168 "nvme_admin": false, 00:22:47.168 "nvme_io": false 00:22:47.168 }, 00:22:47.168 "memory_domains": [ 00:22:47.168 { 00:22:47.168 "dma_device_id": "system", 00:22:47.168 "dma_device_type": 1 00:22:47.168 }, 00:22:47.168 { 00:22:47.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.168 "dma_device_type": 2 00:22:47.168 } 00:22:47.168 ], 00:22:47.168 "driver_specific": {} 00:22:47.168 } 00:22:47.168 ] 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.168 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.427 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:47.427 "name": "Existed_Raid", 00:22:47.427 "uuid": "e4ee976e-1356-11ef-8e8f-9dd684e56d79", 00:22:47.427 "strip_size_kb": 0, 00:22:47.427 "state": "online", 00:22:47.427 "raid_level": "raid1", 00:22:47.427 "superblock": false, 00:22:47.427 "num_base_bdevs": 3, 00:22:47.427 "num_base_bdevs_discovered": 3, 00:22:47.427 "num_base_bdevs_operational": 3, 00:22:47.427 "base_bdevs_list": [ 00:22:47.427 { 00:22:47.427 "name": "NewBaseBdev", 00:22:47.427 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:47.427 "is_configured": true, 00:22:47.427 "data_offset": 0, 00:22:47.427 "data_size": 65536 00:22:47.427 }, 00:22:47.427 { 00:22:47.427 "name": "BaseBdev2", 00:22:47.427 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:47.427 "is_configured": true, 00:22:47.427 "data_offset": 0, 00:22:47.427 "data_size": 65536 00:22:47.427 }, 00:22:47.427 { 00:22:47.427 "name": "BaseBdev3", 00:22:47.427 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:47.427 "is_configured": true, 00:22:47.427 "data_offset": 0, 00:22:47.427 "data_size": 65536 00:22:47.427 } 00:22:47.427 ] 00:22:47.427 }' 00:22:47.427 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:47.427 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:47.685 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:47.944 [2024-05-16 07:35:41.357144] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:47.944 "name": "Existed_Raid", 00:22:47.944 "aliases": [ 00:22:47.944 "e4ee976e-1356-11ef-8e8f-9dd684e56d79" 00:22:47.944 ], 00:22:47.944 "product_name": "Raid Volume", 00:22:47.944 "block_size": 512, 00:22:47.944 "num_blocks": 65536, 00:22:47.944 "uuid": "e4ee976e-1356-11ef-8e8f-9dd684e56d79", 00:22:47.944 "assigned_rate_limits": { 00:22:47.944 "rw_ios_per_sec": 0, 00:22:47.944 "rw_mbytes_per_sec": 0, 00:22:47.944 "r_mbytes_per_sec": 0, 00:22:47.944 "w_mbytes_per_sec": 0 00:22:47.944 }, 00:22:47.944 "claimed": false, 00:22:47.944 "zoned": false, 00:22:47.944 "supported_io_types": { 00:22:47.944 "read": true, 00:22:47.944 "write": true, 00:22:47.944 "unmap": false, 00:22:47.944 "write_zeroes": true, 
00:22:47.944 "flush": false, 00:22:47.944 "reset": true, 00:22:47.944 "compare": false, 00:22:47.944 "compare_and_write": false, 00:22:47.944 "abort": false, 00:22:47.944 "nvme_admin": false, 00:22:47.944 "nvme_io": false 00:22:47.944 }, 00:22:47.944 "memory_domains": [ 00:22:47.944 { 00:22:47.944 "dma_device_id": "system", 00:22:47.944 "dma_device_type": 1 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.944 "dma_device_type": 2 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "dma_device_id": "system", 00:22:47.944 "dma_device_type": 1 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.944 "dma_device_type": 2 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "dma_device_id": "system", 00:22:47.944 "dma_device_type": 1 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.944 "dma_device_type": 2 00:22:47.944 } 00:22:47.944 ], 00:22:47.944 "driver_specific": { 00:22:47.944 "raid": { 00:22:47.944 "uuid": "e4ee976e-1356-11ef-8e8f-9dd684e56d79", 00:22:47.944 "strip_size_kb": 0, 00:22:47.944 "state": "online", 00:22:47.944 "raid_level": "raid1", 00:22:47.944 "superblock": false, 00:22:47.944 "num_base_bdevs": 3, 00:22:47.944 "num_base_bdevs_discovered": 3, 00:22:47.944 "num_base_bdevs_operational": 3, 00:22:47.944 "base_bdevs_list": [ 00:22:47.944 { 00:22:47.944 "name": "NewBaseBdev", 00:22:47.944 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:47.944 "is_configured": true, 00:22:47.944 "data_offset": 0, 00:22:47.944 "data_size": 65536 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "name": "BaseBdev2", 00:22:47.944 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:47.944 "is_configured": true, 00:22:47.944 "data_offset": 0, 00:22:47.944 "data_size": 65536 00:22:47.944 }, 00:22:47.944 { 00:22:47.944 "name": "BaseBdev3", 00:22:47.944 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:47.944 "is_configured": true, 00:22:47.944 "data_offset": 0, 00:22:47.944 "data_size": 65536 00:22:47.944 } 00:22:47.944 ] 00:22:47.944 } 00:22:47.944 } 00:22:47.944 }' 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:22:47.944 BaseBdev2 00:22:47.944 BaseBdev3' 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:47.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:48.204 "name": "NewBaseBdev", 00:22:48.204 "aliases": [ 00:22:48.204 "e168edc6-1356-11ef-8e8f-9dd684e56d79" 00:22:48.204 ], 00:22:48.204 "product_name": "Malloc disk", 00:22:48.204 "block_size": 512, 00:22:48.204 "num_blocks": 65536, 00:22:48.204 "uuid": "e168edc6-1356-11ef-8e8f-9dd684e56d79", 00:22:48.204 "assigned_rate_limits": { 00:22:48.204 "rw_ios_per_sec": 0, 00:22:48.204 "rw_mbytes_per_sec": 0, 00:22:48.204 "r_mbytes_per_sec": 0, 00:22:48.204 "w_mbytes_per_sec": 0 00:22:48.204 }, 00:22:48.204 "claimed": true, 00:22:48.204 "claim_type": "exclusive_write", 00:22:48.204 "zoned": 
false, 00:22:48.204 "supported_io_types": { 00:22:48.204 "read": true, 00:22:48.204 "write": true, 00:22:48.204 "unmap": true, 00:22:48.204 "write_zeroes": true, 00:22:48.204 "flush": true, 00:22:48.204 "reset": true, 00:22:48.204 "compare": false, 00:22:48.204 "compare_and_write": false, 00:22:48.204 "abort": true, 00:22:48.204 "nvme_admin": false, 00:22:48.204 "nvme_io": false 00:22:48.204 }, 00:22:48.204 "memory_domains": [ 00:22:48.204 { 00:22:48.204 "dma_device_id": "system", 00:22:48.204 "dma_device_type": 1 00:22:48.204 }, 00:22:48.204 { 00:22:48.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.204 "dma_device_type": 2 00:22:48.204 } 00:22:48.204 ], 00:22:48.204 "driver_specific": {} 00:22:48.204 }' 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:48.204 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:48.463 "name": "BaseBdev2", 00:22:48.463 "aliases": [ 00:22:48.463 "df8b9d24-1356-11ef-8e8f-9dd684e56d79" 00:22:48.463 ], 00:22:48.463 "product_name": "Malloc disk", 00:22:48.463 "block_size": 512, 00:22:48.463 "num_blocks": 65536, 00:22:48.463 "uuid": "df8b9d24-1356-11ef-8e8f-9dd684e56d79", 00:22:48.463 "assigned_rate_limits": { 00:22:48.463 "rw_ios_per_sec": 0, 00:22:48.463 "rw_mbytes_per_sec": 0, 00:22:48.463 "r_mbytes_per_sec": 0, 00:22:48.463 "w_mbytes_per_sec": 0 00:22:48.463 }, 00:22:48.463 "claimed": true, 00:22:48.463 "claim_type": "exclusive_write", 00:22:48.463 "zoned": false, 00:22:48.463 "supported_io_types": { 00:22:48.463 "read": true, 00:22:48.463 "write": true, 00:22:48.463 "unmap": true, 00:22:48.463 "write_zeroes": true, 00:22:48.463 "flush": true, 00:22:48.463 "reset": true, 00:22:48.463 "compare": false, 00:22:48.463 "compare_and_write": false, 00:22:48.463 "abort": true, 00:22:48.463 "nvme_admin": false, 00:22:48.463 "nvme_io": false 00:22:48.463 }, 00:22:48.463 "memory_domains": [ 00:22:48.463 { 00:22:48.463 "dma_device_id": "system", 
00:22:48.463 "dma_device_type": 1 00:22:48.463 }, 00:22:48.463 { 00:22:48.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.463 "dma_device_type": 2 00:22:48.463 } 00:22:48.463 ], 00:22:48.463 "driver_specific": {} 00:22:48.463 }' 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.463 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.463 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:48.463 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.463 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.722 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:48.722 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:48.722 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:48.722 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:48.981 "name": "BaseBdev3", 00:22:48.981 "aliases": [ 00:22:48.981 "dff97b4c-1356-11ef-8e8f-9dd684e56d79" 00:22:48.981 ], 00:22:48.981 "product_name": "Malloc disk", 00:22:48.981 "block_size": 512, 00:22:48.981 "num_blocks": 65536, 00:22:48.981 "uuid": "dff97b4c-1356-11ef-8e8f-9dd684e56d79", 00:22:48.981 "assigned_rate_limits": { 00:22:48.981 "rw_ios_per_sec": 0, 00:22:48.981 "rw_mbytes_per_sec": 0, 00:22:48.981 "r_mbytes_per_sec": 0, 00:22:48.981 "w_mbytes_per_sec": 0 00:22:48.981 }, 00:22:48.981 "claimed": true, 00:22:48.981 "claim_type": "exclusive_write", 00:22:48.981 "zoned": false, 00:22:48.981 "supported_io_types": { 00:22:48.981 "read": true, 00:22:48.981 "write": true, 00:22:48.981 "unmap": true, 00:22:48.981 "write_zeroes": true, 00:22:48.981 "flush": true, 00:22:48.981 "reset": true, 00:22:48.981 "compare": false, 00:22:48.981 "compare_and_write": false, 00:22:48.981 "abort": true, 00:22:48.981 "nvme_admin": false, 00:22:48.981 "nvme_io": false 00:22:48.981 }, 00:22:48.981 "memory_domains": [ 00:22:48.981 { 00:22:48.981 "dma_device_id": "system", 00:22:48.981 "dma_device_type": 1 00:22:48.981 }, 00:22:48.981 { 00:22:48.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.981 "dma_device_type": 2 00:22:48.981 } 00:22:48.981 ], 00:22:48.981 "driver_specific": {} 00:22:48.981 }' 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:48.981 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:49.264 [2024-05-16 07:35:42.653188] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:49.264 [2024-05-16 07:35:42.653210] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.264 [2024-05-16 07:35:42.653224] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.264 [2024-05-16 07:35:42.653285] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.264 [2024-05-16 07:35:42.653289] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bd28f00 name Existed_Raid, state offline 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 55653 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 55653 ']' 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 55653 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 55653 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:22:49.264 killing process with pid 55653 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55653' 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 55653 00:22:49.264 [2024-05-16 07:35:42.683614] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.264 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 55653 00:22:49.264 [2024-05-16 07:35:42.697645] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:49.522 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:22:49.522 
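For reference, the raid_state_function_test trace above drives the bdev_raid RPCs by hand; a minimal sketch of the same flow, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock and using the rpc.py client from this repo (the rpc() wrapper below is just shorthand for the client invocation seen in the trace; names and sizes mirror the trace):

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # create three 32 MB, 512-byte-block malloc base bdevs and let examine finish
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        rpc bdev_malloc_create 32 512 -b "$b"
        rpc bdev_wait_for_examine
    done
    # assemble them into a raid1 volume and inspect its state (online once all three are present)
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # degrade the volume and repair it, as the state checks above do
    rpc bdev_raid_remove_base_bdev BaseBdev2
    rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    # tear down
    rpc bdev_raid_delete Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do rpc bdev_malloc_delete "$b"; done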
00:22:49.522 real 0m22.292s 00:22:49.522 user 0m40.664s 00:22:49.522 sys 0m3.209s 00:22:49.522 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:49.522 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.522 ************************************ 00:22:49.522 END TEST raid_state_function_test 00:22:49.522 ************************************ 00:22:49.522 07:35:42 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:22:49.522 07:35:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:49.522 07:35:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:49.522 07:35:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:49.522 ************************************ 00:22:49.523 START TEST raid_state_function_test_sb 00:22:49.523 ************************************ 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=3 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 
'!=' raid1 ']' 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=56374 00:22:49.523 Process raid pid: 56374 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 56374' 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 56374 /var/tmp/spdk-raid.sock 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 56374 ']' 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:49.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.523 07:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.523 [2024-05-16 07:35:42.924452] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:49.523 [2024-05-16 07:35:42.924591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:22:50.090 EAL: TSC is not safe to use in SMP mode 00:22:50.090 EAL: TSC is not invariant 00:22:50.090 [2024-05-16 07:35:43.368604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.090 [2024-05-16 07:35:43.450211] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
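The raid_state_function_test_sb run starting here repeats the same checks against a freshly launched bdev_svc target, the functional difference being the -s (superblock) flag passed to bdev_raid_create. A minimal sketch of that setup, with the socket and paths as in this run and the pid/readiness handling simplified relative to the harness's waitforlisten and killprocess helpers:

    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # the harness waits for the UNIX-domain RPC socket (waitforlisten) before issuing RPCs
    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # as in the trace below, the raid can be declared before its base bdevs exist;
    # it stays in the "configuring" state until all three base bdevs are added
    rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # stop the target when done
    kill $raid_pid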
00:22:50.090 [2024-05-16 07:35:43.452328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.090 [2024-05-16 07:35:43.453054] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.090 [2024-05-16 07:35:43.453067] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.663 07:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.663 07:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:22:50.663 07:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:50.663 [2024-05-16 07:35:44.155818] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:50.663 [2024-05-16 07:35:44.155897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:50.663 [2024-05-16 07:35:44.155902] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:50.663 [2024-05-16 07:35:44.155910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:50.663 [2024-05-16 07:35:44.155920] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:50.663 [2024-05-16 07:35:44.155928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.663 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.922 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.922 "name": "Existed_Raid", 00:22:50.922 "uuid": "e75d8e6c-1356-11ef-8e8f-9dd684e56d79", 00:22:50.922 "strip_size_kb": 0, 00:22:50.922 "state": "configuring", 00:22:50.922 "raid_level": "raid1", 00:22:50.922 "superblock": true, 00:22:50.922 "num_base_bdevs": 3, 00:22:50.922 "num_base_bdevs_discovered": 0, 00:22:50.922 
"num_base_bdevs_operational": 3, 00:22:50.922 "base_bdevs_list": [ 00:22:50.922 { 00:22:50.922 "name": "BaseBdev1", 00:22:50.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.922 "is_configured": false, 00:22:50.922 "data_offset": 0, 00:22:50.922 "data_size": 0 00:22:50.922 }, 00:22:50.922 { 00:22:50.922 "name": "BaseBdev2", 00:22:50.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.922 "is_configured": false, 00:22:50.922 "data_offset": 0, 00:22:50.922 "data_size": 0 00:22:50.922 }, 00:22:50.922 { 00:22:50.922 "name": "BaseBdev3", 00:22:50.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.922 "is_configured": false, 00:22:50.922 "data_offset": 0, 00:22:50.922 "data_size": 0 00:22:50.922 } 00:22:50.922 ] 00:22:50.922 }' 00:22:50.922 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.922 07:35:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.181 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:51.439 [2024-05-16 07:35:44.863811] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:51.439 [2024-05-16 07:35:44.863834] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a52f500 name Existed_Raid, state configuring 00:22:51.439 07:35:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:51.699 [2024-05-16 07:35:45.123835] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.699 [2024-05-16 07:35:45.123895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:51.699 [2024-05-16 07:35:45.123899] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:51.699 [2024-05-16 07:35:45.123907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:51.699 [2024-05-16 07:35:45.123910] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:51.699 [2024-05-16 07:35:45.123916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:51.699 07:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:51.958 [2024-05-16 07:35:45.452744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.958 BaseBdev1 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:51.958 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:51.958 07:35:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.217 07:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:52.475 [ 00:22:52.475 { 00:22:52.475 "name": "BaseBdev1", 00:22:52.475 "aliases": [ 00:22:52.475 "e823507e-1356-11ef-8e8f-9dd684e56d79" 00:22:52.475 ], 00:22:52.475 "product_name": "Malloc disk", 00:22:52.475 "block_size": 512, 00:22:52.475 "num_blocks": 65536, 00:22:52.475 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:52.475 "assigned_rate_limits": { 00:22:52.475 "rw_ios_per_sec": 0, 00:22:52.475 "rw_mbytes_per_sec": 0, 00:22:52.475 "r_mbytes_per_sec": 0, 00:22:52.475 "w_mbytes_per_sec": 0 00:22:52.475 }, 00:22:52.475 "claimed": true, 00:22:52.475 "claim_type": "exclusive_write", 00:22:52.475 "zoned": false, 00:22:52.475 "supported_io_types": { 00:22:52.475 "read": true, 00:22:52.475 "write": true, 00:22:52.475 "unmap": true, 00:22:52.475 "write_zeroes": true, 00:22:52.475 "flush": true, 00:22:52.475 "reset": true, 00:22:52.475 "compare": false, 00:22:52.475 "compare_and_write": false, 00:22:52.475 "abort": true, 00:22:52.475 "nvme_admin": false, 00:22:52.475 "nvme_io": false 00:22:52.475 }, 00:22:52.475 "memory_domains": [ 00:22:52.475 { 00:22:52.475 "dma_device_id": "system", 00:22:52.475 "dma_device_type": 1 00:22:52.475 }, 00:22:52.475 { 00:22:52.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.475 "dma_device_type": 2 00:22:52.475 } 00:22:52.475 ], 00:22:52.475 "driver_specific": {} 00:22:52.475 } 00:22:52.475 ] 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.734 "name": "Existed_Raid", 00:22:52.734 "uuid": 
"e7f1439f-1356-11ef-8e8f-9dd684e56d79", 00:22:52.734 "strip_size_kb": 0, 00:22:52.734 "state": "configuring", 00:22:52.734 "raid_level": "raid1", 00:22:52.734 "superblock": true, 00:22:52.734 "num_base_bdevs": 3, 00:22:52.734 "num_base_bdevs_discovered": 1, 00:22:52.734 "num_base_bdevs_operational": 3, 00:22:52.734 "base_bdevs_list": [ 00:22:52.734 { 00:22:52.734 "name": "BaseBdev1", 00:22:52.734 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:52.734 "is_configured": true, 00:22:52.734 "data_offset": 2048, 00:22:52.734 "data_size": 63488 00:22:52.734 }, 00:22:52.734 { 00:22:52.734 "name": "BaseBdev2", 00:22:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.734 "is_configured": false, 00:22:52.734 "data_offset": 0, 00:22:52.734 "data_size": 0 00:22:52.734 }, 00:22:52.734 { 00:22:52.734 "name": "BaseBdev3", 00:22:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.734 "is_configured": false, 00:22:52.734 "data_offset": 0, 00:22:52.734 "data_size": 0 00:22:52.734 } 00:22:52.734 ] 00:22:52.734 }' 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.734 07:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.300 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:53.300 [2024-05-16 07:35:46.851880] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:53.300 [2024-05-16 07:35:46.851907] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a52f500 name Existed_Raid, state configuring 00:22:53.558 07:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:53.821 [2024-05-16 07:35:47.131942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.821 [2024-05-16 07:35:47.132610] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:53.821 [2024-05-16 07:35:47.132651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:53.821 [2024-05-16 07:35:47.132656] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:53.822 [2024-05-16 07:35:47.132664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.822 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.083 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.083 "name": "Existed_Raid", 00:22:54.083 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:54.083 "strip_size_kb": 0, 00:22:54.083 "state": "configuring", 00:22:54.083 "raid_level": "raid1", 00:22:54.083 "superblock": true, 00:22:54.083 "num_base_bdevs": 3, 00:22:54.083 "num_base_bdevs_discovered": 1, 00:22:54.083 "num_base_bdevs_operational": 3, 00:22:54.083 "base_bdevs_list": [ 00:22:54.083 { 00:22:54.083 "name": "BaseBdev1", 00:22:54.083 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:54.083 "is_configured": true, 00:22:54.083 "data_offset": 2048, 00:22:54.083 "data_size": 63488 00:22:54.083 }, 00:22:54.083 { 00:22:54.083 "name": "BaseBdev2", 00:22:54.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.083 "is_configured": false, 00:22:54.083 "data_offset": 0, 00:22:54.083 "data_size": 0 00:22:54.083 }, 00:22:54.083 { 00:22:54.083 "name": "BaseBdev3", 00:22:54.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.083 "is_configured": false, 00:22:54.083 "data_offset": 0, 00:22:54.083 "data_size": 0 00:22:54.083 } 00:22:54.083 ] 00:22:54.083 }' 00:22:54.083 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.083 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.341 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:54.603 [2024-05-16 07:35:47.972069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.603 BaseBdev2 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:54.603 07:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:54.871 07:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:55.130 [ 00:22:55.130 { 00:22:55.130 "name": "BaseBdev2", 00:22:55.130 "aliases": [ 00:22:55.130 "e9a3dad9-1356-11ef-8e8f-9dd684e56d79" 00:22:55.130 ], 00:22:55.130 "product_name": "Malloc disk", 00:22:55.130 "block_size": 512, 00:22:55.130 "num_blocks": 65536, 00:22:55.130 "uuid": "e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:55.130 "assigned_rate_limits": { 00:22:55.130 "rw_ios_per_sec": 0, 00:22:55.130 "rw_mbytes_per_sec": 0, 00:22:55.130 "r_mbytes_per_sec": 0, 00:22:55.130 "w_mbytes_per_sec": 0 00:22:55.130 }, 00:22:55.130 "claimed": true, 00:22:55.130 "claim_type": "exclusive_write", 00:22:55.130 "zoned": false, 00:22:55.130 "supported_io_types": { 00:22:55.130 "read": true, 00:22:55.130 "write": true, 00:22:55.130 "unmap": true, 00:22:55.130 "write_zeroes": true, 00:22:55.130 "flush": true, 00:22:55.130 "reset": true, 00:22:55.130 "compare": false, 00:22:55.130 "compare_and_write": false, 00:22:55.130 "abort": true, 00:22:55.130 "nvme_admin": false, 00:22:55.130 "nvme_io": false 00:22:55.130 }, 00:22:55.130 "memory_domains": [ 00:22:55.130 { 00:22:55.130 "dma_device_id": "system", 00:22:55.130 "dma_device_type": 1 00:22:55.130 }, 00:22:55.130 { 00:22:55.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.130 "dma_device_type": 2 00:22:55.130 } 00:22:55.130 ], 00:22:55.130 "driver_specific": {} 00:22:55.130 } 00:22:55.130 ] 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.130 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.389 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.389 "name": "Existed_Raid", 00:22:55.389 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:55.389 "strip_size_kb": 0, 
00:22:55.389 "state": "configuring", 00:22:55.389 "raid_level": "raid1", 00:22:55.389 "superblock": true, 00:22:55.389 "num_base_bdevs": 3, 00:22:55.389 "num_base_bdevs_discovered": 2, 00:22:55.389 "num_base_bdevs_operational": 3, 00:22:55.389 "base_bdevs_list": [ 00:22:55.389 { 00:22:55.389 "name": "BaseBdev1", 00:22:55.389 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:55.389 "is_configured": true, 00:22:55.389 "data_offset": 2048, 00:22:55.389 "data_size": 63488 00:22:55.389 }, 00:22:55.389 { 00:22:55.389 "name": "BaseBdev2", 00:22:55.389 "uuid": "e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:55.389 "is_configured": true, 00:22:55.389 "data_offset": 2048, 00:22:55.389 "data_size": 63488 00:22:55.389 }, 00:22:55.389 { 00:22:55.389 "name": "BaseBdev3", 00:22:55.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.389 "is_configured": false, 00:22:55.389 "data_offset": 0, 00:22:55.389 "data_size": 0 00:22:55.389 } 00:22:55.389 ] 00:22:55.389 }' 00:22:55.389 07:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.389 07:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.669 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:55.927 [2024-05-16 07:35:49.328067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:55.927 [2024-05-16 07:35:49.328118] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a52fa00 00:22:55.927 [2024-05-16 07:35:49.328123] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:55.927 [2024-05-16 07:35:49.328140] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a592ec0 00:22:55.927 [2024-05-16 07:35:49.328177] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a52fa00 00:22:55.927 [2024-05-16 07:35:49.328181] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a52fa00 00:22:55.927 [2024-05-16 07:35:49.328196] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.927 BaseBdev3 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:55.927 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.185 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:56.443 [ 00:22:56.443 { 00:22:56.443 "name": "BaseBdev3", 00:22:56.443 "aliases": [ 00:22:56.443 "ea72c496-1356-11ef-8e8f-9dd684e56d79" 00:22:56.443 ], 
00:22:56.443 "product_name": "Malloc disk", 00:22:56.443 "block_size": 512, 00:22:56.443 "num_blocks": 65536, 00:22:56.443 "uuid": "ea72c496-1356-11ef-8e8f-9dd684e56d79", 00:22:56.443 "assigned_rate_limits": { 00:22:56.443 "rw_ios_per_sec": 0, 00:22:56.443 "rw_mbytes_per_sec": 0, 00:22:56.443 "r_mbytes_per_sec": 0, 00:22:56.443 "w_mbytes_per_sec": 0 00:22:56.443 }, 00:22:56.443 "claimed": true, 00:22:56.443 "claim_type": "exclusive_write", 00:22:56.443 "zoned": false, 00:22:56.443 "supported_io_types": { 00:22:56.443 "read": true, 00:22:56.443 "write": true, 00:22:56.443 "unmap": true, 00:22:56.443 "write_zeroes": true, 00:22:56.443 "flush": true, 00:22:56.443 "reset": true, 00:22:56.443 "compare": false, 00:22:56.443 "compare_and_write": false, 00:22:56.443 "abort": true, 00:22:56.443 "nvme_admin": false, 00:22:56.443 "nvme_io": false 00:22:56.443 }, 00:22:56.443 "memory_domains": [ 00:22:56.443 { 00:22:56.443 "dma_device_id": "system", 00:22:56.443 "dma_device_type": 1 00:22:56.443 }, 00:22:56.443 { 00:22:56.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.443 "dma_device_type": 2 00:22:56.443 } 00:22:56.443 ], 00:22:56.443 "driver_specific": {} 00:22:56.443 } 00:22:56.443 ] 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.443 07:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.705 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:56.705 "name": "Existed_Raid", 00:22:56.705 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:56.705 "strip_size_kb": 0, 00:22:56.705 "state": "online", 00:22:56.705 "raid_level": "raid1", 00:22:56.705 "superblock": true, 00:22:56.705 "num_base_bdevs": 3, 00:22:56.705 "num_base_bdevs_discovered": 3, 00:22:56.705 "num_base_bdevs_operational": 3, 00:22:56.705 "base_bdevs_list": [ 00:22:56.705 { 00:22:56.705 
"name": "BaseBdev1", 00:22:56.705 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:56.705 "is_configured": true, 00:22:56.705 "data_offset": 2048, 00:22:56.705 "data_size": 63488 00:22:56.705 }, 00:22:56.705 { 00:22:56.705 "name": "BaseBdev2", 00:22:56.705 "uuid": "e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:56.705 "is_configured": true, 00:22:56.705 "data_offset": 2048, 00:22:56.705 "data_size": 63488 00:22:56.705 }, 00:22:56.705 { 00:22:56.705 "name": "BaseBdev3", 00:22:56.705 "uuid": "ea72c496-1356-11ef-8e8f-9dd684e56d79", 00:22:56.705 "is_configured": true, 00:22:56.705 "data_offset": 2048, 00:22:56.705 "data_size": 63488 00:22:56.705 } 00:22:56.705 ] 00:22:56.705 }' 00:22:56.705 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:56.705 07:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:22:57.273 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:57.531 [2024-05-16 07:35:50.884059] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.531 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:22:57.531 "name": "Existed_Raid", 00:22:57.531 "aliases": [ 00:22:57.531 "e923ad3b-1356-11ef-8e8f-9dd684e56d79" 00:22:57.531 ], 00:22:57.531 "product_name": "Raid Volume", 00:22:57.531 "block_size": 512, 00:22:57.531 "num_blocks": 63488, 00:22:57.531 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:57.531 "assigned_rate_limits": { 00:22:57.531 "rw_ios_per_sec": 0, 00:22:57.531 "rw_mbytes_per_sec": 0, 00:22:57.531 "r_mbytes_per_sec": 0, 00:22:57.531 "w_mbytes_per_sec": 0 00:22:57.531 }, 00:22:57.531 "claimed": false, 00:22:57.532 "zoned": false, 00:22:57.532 "supported_io_types": { 00:22:57.532 "read": true, 00:22:57.532 "write": true, 00:22:57.532 "unmap": false, 00:22:57.532 "write_zeroes": true, 00:22:57.532 "flush": false, 00:22:57.532 "reset": true, 00:22:57.532 "compare": false, 00:22:57.532 "compare_and_write": false, 00:22:57.532 "abort": false, 00:22:57.532 "nvme_admin": false, 00:22:57.532 "nvme_io": false 00:22:57.532 }, 00:22:57.532 "memory_domains": [ 00:22:57.532 { 00:22:57.532 "dma_device_id": "system", 00:22:57.532 "dma_device_type": 1 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.532 "dma_device_type": 2 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "dma_device_id": "system", 00:22:57.532 "dma_device_type": 1 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.532 "dma_device_type": 2 00:22:57.532 }, 
00:22:57.532 { 00:22:57.532 "dma_device_id": "system", 00:22:57.532 "dma_device_type": 1 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.532 "dma_device_type": 2 00:22:57.532 } 00:22:57.532 ], 00:22:57.532 "driver_specific": { 00:22:57.532 "raid": { 00:22:57.532 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:57.532 "strip_size_kb": 0, 00:22:57.532 "state": "online", 00:22:57.532 "raid_level": "raid1", 00:22:57.532 "superblock": true, 00:22:57.532 "num_base_bdevs": 3, 00:22:57.532 "num_base_bdevs_discovered": 3, 00:22:57.532 "num_base_bdevs_operational": 3, 00:22:57.532 "base_bdevs_list": [ 00:22:57.532 { 00:22:57.532 "name": "BaseBdev1", 00:22:57.532 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:57.532 "is_configured": true, 00:22:57.532 "data_offset": 2048, 00:22:57.532 "data_size": 63488 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "name": "BaseBdev2", 00:22:57.532 "uuid": "e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:57.532 "is_configured": true, 00:22:57.532 "data_offset": 2048, 00:22:57.532 "data_size": 63488 00:22:57.532 }, 00:22:57.532 { 00:22:57.532 "name": "BaseBdev3", 00:22:57.532 "uuid": "ea72c496-1356-11ef-8e8f-9dd684e56d79", 00:22:57.532 "is_configured": true, 00:22:57.532 "data_offset": 2048, 00:22:57.532 "data_size": 63488 00:22:57.532 } 00:22:57.532 ] 00:22:57.532 } 00:22:57.532 } 00:22:57.532 }' 00:22:57.532 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:57.532 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:22:57.532 BaseBdev2 00:22:57.532 BaseBdev3' 00:22:57.532 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:57.532 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:57.532 07:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:57.791 "name": "BaseBdev1", 00:22:57.791 "aliases": [ 00:22:57.791 "e823507e-1356-11ef-8e8f-9dd684e56d79" 00:22:57.791 ], 00:22:57.791 "product_name": "Malloc disk", 00:22:57.791 "block_size": 512, 00:22:57.791 "num_blocks": 65536, 00:22:57.791 "uuid": "e823507e-1356-11ef-8e8f-9dd684e56d79", 00:22:57.791 "assigned_rate_limits": { 00:22:57.791 "rw_ios_per_sec": 0, 00:22:57.791 "rw_mbytes_per_sec": 0, 00:22:57.791 "r_mbytes_per_sec": 0, 00:22:57.791 "w_mbytes_per_sec": 0 00:22:57.791 }, 00:22:57.791 "claimed": true, 00:22:57.791 "claim_type": "exclusive_write", 00:22:57.791 "zoned": false, 00:22:57.791 "supported_io_types": { 00:22:57.791 "read": true, 00:22:57.791 "write": true, 00:22:57.791 "unmap": true, 00:22:57.791 "write_zeroes": true, 00:22:57.791 "flush": true, 00:22:57.791 "reset": true, 00:22:57.791 "compare": false, 00:22:57.791 "compare_and_write": false, 00:22:57.791 "abort": true, 00:22:57.791 "nvme_admin": false, 00:22:57.791 "nvme_io": false 00:22:57.791 }, 00:22:57.791 "memory_domains": [ 00:22:57.791 { 00:22:57.791 "dma_device_id": "system", 00:22:57.791 "dma_device_type": 1 00:22:57.791 }, 00:22:57.791 { 00:22:57.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.791 "dma_device_type": 2 00:22:57.791 } 00:22:57.791 ], 00:22:57.791 "driver_specific": {} 
00:22:57.791 }' 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:57.791 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:58.059 "name": "BaseBdev2", 00:22:58.059 "aliases": [ 00:22:58.059 "e9a3dad9-1356-11ef-8e8f-9dd684e56d79" 00:22:58.059 ], 00:22:58.059 "product_name": "Malloc disk", 00:22:58.059 "block_size": 512, 00:22:58.059 "num_blocks": 65536, 00:22:58.059 "uuid": "e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:58.059 "assigned_rate_limits": { 00:22:58.059 "rw_ios_per_sec": 0, 00:22:58.059 "rw_mbytes_per_sec": 0, 00:22:58.059 "r_mbytes_per_sec": 0, 00:22:58.059 "w_mbytes_per_sec": 0 00:22:58.059 }, 00:22:58.059 "claimed": true, 00:22:58.059 "claim_type": "exclusive_write", 00:22:58.059 "zoned": false, 00:22:58.059 "supported_io_types": { 00:22:58.059 "read": true, 00:22:58.059 "write": true, 00:22:58.059 "unmap": true, 00:22:58.059 "write_zeroes": true, 00:22:58.059 "flush": true, 00:22:58.059 "reset": true, 00:22:58.059 "compare": false, 00:22:58.059 "compare_and_write": false, 00:22:58.059 "abort": true, 00:22:58.059 "nvme_admin": false, 00:22:58.059 "nvme_io": false 00:22:58.059 }, 00:22:58.059 "memory_domains": [ 00:22:58.059 { 00:22:58.059 "dma_device_id": "system", 00:22:58.059 "dma_device_type": 1 00:22:58.059 }, 00:22:58.059 { 00:22:58.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.059 "dma_device_type": 2 00:22:58.059 } 00:22:58.059 ], 00:22:58.059 "driver_specific": {} 00:22:58.059 }' 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.059 
07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:58.059 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:22:58.318 "name": "BaseBdev3", 00:22:58.318 "aliases": [ 00:22:58.318 "ea72c496-1356-11ef-8e8f-9dd684e56d79" 00:22:58.318 ], 00:22:58.318 "product_name": "Malloc disk", 00:22:58.318 "block_size": 512, 00:22:58.318 "num_blocks": 65536, 00:22:58.318 "uuid": "ea72c496-1356-11ef-8e8f-9dd684e56d79", 00:22:58.318 "assigned_rate_limits": { 00:22:58.318 "rw_ios_per_sec": 0, 00:22:58.318 "rw_mbytes_per_sec": 0, 00:22:58.318 "r_mbytes_per_sec": 0, 00:22:58.318 "w_mbytes_per_sec": 0 00:22:58.318 }, 00:22:58.318 "claimed": true, 00:22:58.318 "claim_type": "exclusive_write", 00:22:58.318 "zoned": false, 00:22:58.318 "supported_io_types": { 00:22:58.318 "read": true, 00:22:58.318 "write": true, 00:22:58.318 "unmap": true, 00:22:58.318 "write_zeroes": true, 00:22:58.318 "flush": true, 00:22:58.318 "reset": true, 00:22:58.318 "compare": false, 00:22:58.318 "compare_and_write": false, 00:22:58.318 "abort": true, 00:22:58.318 "nvme_admin": false, 00:22:58.318 "nvme_io": false 00:22:58.318 }, 00:22:58.318 "memory_domains": [ 00:22:58.318 { 00:22:58.318 "dma_device_id": "system", 00:22:58.318 "dma_device_type": 1 00:22:58.318 }, 00:22:58.318 { 00:22:58.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.318 "dma_device_type": 2 00:22:58.318 } 00:22:58.318 ], 00:22:58.318 "driver_specific": {} 00:22:58.318 }' 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:22:58.318 07:35:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:58.318 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.319 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:22:58.319 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:22:58.319 07:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:58.580 [2024-05-16 07:35:52.060058] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.580 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.839 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.839 "name": "Existed_Raid", 00:22:58.839 "uuid": "e923ad3b-1356-11ef-8e8f-9dd684e56d79", 00:22:58.839 "strip_size_kb": 0, 00:22:58.839 "state": "online", 00:22:58.839 "raid_level": "raid1", 00:22:58.839 "superblock": true, 00:22:58.839 "num_base_bdevs": 3, 00:22:58.839 "num_base_bdevs_discovered": 2, 00:22:58.839 "num_base_bdevs_operational": 2, 00:22:58.839 "base_bdevs_list": [ 00:22:58.839 { 00:22:58.839 "name": null, 00:22:58.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.839 "is_configured": false, 00:22:58.839 "data_offset": 2048, 00:22:58.839 "data_size": 63488 00:22:58.839 }, 00:22:58.839 { 00:22:58.839 "name": "BaseBdev2", 00:22:58.839 "uuid": 
"e9a3dad9-1356-11ef-8e8f-9dd684e56d79", 00:22:58.839 "is_configured": true, 00:22:58.839 "data_offset": 2048, 00:22:58.839 "data_size": 63488 00:22:58.839 }, 00:22:58.839 { 00:22:58.839 "name": "BaseBdev3", 00:22:58.839 "uuid": "ea72c496-1356-11ef-8e8f-9dd684e56d79", 00:22:58.839 "is_configured": true, 00:22:58.839 "data_offset": 2048, 00:22:58.839 "data_size": 63488 00:22:58.839 } 00:22:58.839 ] 00:22:58.839 }' 00:22:58.839 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.839 07:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.406 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:59.406 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:59.406 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.406 07:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:22:59.665 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:22:59.665 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:59.665 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:59.924 [2024-05-16 07:35:53.312828] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:59.924 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:59.924 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:59.924 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.924 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:00.182 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:00.182 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.182 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:00.439 [2024-05-16 07:35:53.837496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:00.439 [2024-05-16 07:35:53.837547] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.439 [2024-05-16 07:35:53.842311] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.439 [2024-05-16 07:35:53.842325] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.439 [2024-05-16 07:35:53.842329] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a52fa00 name Existed_Raid, state offline 00:23:00.439 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:00.439 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:00.439 07:35:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.439 07:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 3 -gt 2 ']' 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.695 BaseBdev2 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:00.695 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.953 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:01.211 [ 00:23:01.211 { 00:23:01.211 "name": "BaseBdev2", 00:23:01.211 "aliases": [ 00:23:01.211 "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79" 00:23:01.211 ], 00:23:01.211 "product_name": "Malloc disk", 00:23:01.211 "block_size": 512, 00:23:01.211 "num_blocks": 65536, 00:23:01.211 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:01.211 "assigned_rate_limits": { 00:23:01.211 "rw_ios_per_sec": 0, 00:23:01.211 "rw_mbytes_per_sec": 0, 00:23:01.211 "r_mbytes_per_sec": 0, 00:23:01.211 "w_mbytes_per_sec": 0 00:23:01.211 }, 00:23:01.211 "claimed": false, 00:23:01.211 "zoned": false, 00:23:01.211 "supported_io_types": { 00:23:01.211 "read": true, 00:23:01.211 "write": true, 00:23:01.211 "unmap": true, 00:23:01.211 "write_zeroes": true, 00:23:01.211 "flush": true, 00:23:01.211 "reset": true, 00:23:01.211 "compare": false, 00:23:01.211 "compare_and_write": false, 00:23:01.211 "abort": true, 00:23:01.211 "nvme_admin": false, 00:23:01.211 "nvme_io": false 00:23:01.211 }, 00:23:01.211 "memory_domains": [ 00:23:01.211 { 00:23:01.211 "dma_device_id": "system", 00:23:01.211 "dma_device_type": 1 00:23:01.211 }, 00:23:01.211 { 00:23:01.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.211 "dma_device_type": 2 00:23:01.211 } 00:23:01.211 ], 00:23:01.211 "driver_specific": {} 00:23:01.211 } 00:23:01.211 ] 00:23:01.211 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 
0 00:23:01.211 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:23:01.211 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:01.211 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:01.528 BaseBdev3 00:23:01.528 07:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:23:01.528 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:01.528 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:01.528 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:01.528 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:01.529 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:01.529 07:35:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.785 07:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:01.785 [ 00:23:01.785 { 00:23:01.785 "name": "BaseBdev3", 00:23:01.785 "aliases": [ 00:23:01.785 "edcad505-1356-11ef-8e8f-9dd684e56d79" 00:23:01.785 ], 00:23:01.785 "product_name": "Malloc disk", 00:23:01.785 "block_size": 512, 00:23:01.785 "num_blocks": 65536, 00:23:01.785 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:01.785 "assigned_rate_limits": { 00:23:01.785 "rw_ios_per_sec": 0, 00:23:01.785 "rw_mbytes_per_sec": 0, 00:23:01.785 "r_mbytes_per_sec": 0, 00:23:01.785 "w_mbytes_per_sec": 0 00:23:01.785 }, 00:23:01.785 "claimed": false, 00:23:01.785 "zoned": false, 00:23:01.785 "supported_io_types": { 00:23:01.785 "read": true, 00:23:01.785 "write": true, 00:23:01.785 "unmap": true, 00:23:01.785 "write_zeroes": true, 00:23:01.785 "flush": true, 00:23:01.785 "reset": true, 00:23:01.785 "compare": false, 00:23:01.785 "compare_and_write": false, 00:23:01.785 "abort": true, 00:23:01.785 "nvme_admin": false, 00:23:01.785 "nvme_io": false 00:23:01.785 }, 00:23:01.785 "memory_domains": [ 00:23:01.785 { 00:23:01.785 "dma_device_id": "system", 00:23:01.785 "dma_device_type": 1 00:23:01.785 }, 00:23:01.785 { 00:23:01.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.785 "dma_device_type": 2 00:23:01.785 } 00:23:01.785 ], 00:23:01.785 "driver_specific": {} 00:23:01.785 } 00:23:01.785 ] 00:23:01.785 07:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:01.785 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:23:01.785 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:01.785 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:02.042 [2024-05-16 07:35:55.522301] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:23:02.042 [2024-05-16 07:35:55.522343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:02.042 [2024-05-16 07:35:55.522349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.042 [2024-05-16 07:35:55.522717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.042 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.304 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.304 "name": "Existed_Raid", 00:23:02.304 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:02.304 "strip_size_kb": 0, 00:23:02.304 "state": "configuring", 00:23:02.304 "raid_level": "raid1", 00:23:02.304 "superblock": true, 00:23:02.304 "num_base_bdevs": 3, 00:23:02.304 "num_base_bdevs_discovered": 2, 00:23:02.304 "num_base_bdevs_operational": 3, 00:23:02.304 "base_bdevs_list": [ 00:23:02.304 { 00:23:02.304 "name": "BaseBdev1", 00:23:02.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.304 "is_configured": false, 00:23:02.304 "data_offset": 0, 00:23:02.304 "data_size": 0 00:23:02.304 }, 00:23:02.304 { 00:23:02.304 "name": "BaseBdev2", 00:23:02.304 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:02.304 "is_configured": true, 00:23:02.304 "data_offset": 2048, 00:23:02.304 "data_size": 63488 00:23:02.304 }, 00:23:02.304 { 00:23:02.304 "name": "BaseBdev3", 00:23:02.304 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:02.304 "is_configured": true, 00:23:02.304 "data_offset": 2048, 00:23:02.304 "data_size": 63488 00:23:02.304 } 00:23:02.304 ] 00:23:02.304 }' 00:23:02.304 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.304 07:35:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.567 07:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:02.825 [2024-05-16 
07:35:56.214313] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.825 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.083 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.083 "name": "Existed_Raid", 00:23:03.083 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:03.083 "strip_size_kb": 0, 00:23:03.083 "state": "configuring", 00:23:03.083 "raid_level": "raid1", 00:23:03.083 "superblock": true, 00:23:03.083 "num_base_bdevs": 3, 00:23:03.083 "num_base_bdevs_discovered": 1, 00:23:03.083 "num_base_bdevs_operational": 3, 00:23:03.083 "base_bdevs_list": [ 00:23:03.083 { 00:23:03.083 "name": "BaseBdev1", 00:23:03.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.083 "is_configured": false, 00:23:03.083 "data_offset": 0, 00:23:03.083 "data_size": 0 00:23:03.083 }, 00:23:03.083 { 00:23:03.083 "name": null, 00:23:03.083 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:03.083 "is_configured": false, 00:23:03.083 "data_offset": 2048, 00:23:03.083 "data_size": 63488 00:23:03.083 }, 00:23:03.083 { 00:23:03.083 "name": "BaseBdev3", 00:23:03.083 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:03.083 "is_configured": true, 00:23:03.083 "data_offset": 2048, 00:23:03.083 "data_size": 63488 00:23:03.083 } 00:23:03.083 ] 00:23:03.083 }' 00:23:03.083 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.083 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.342 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.342 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:03.342 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:23:03.342 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:03.600 [2024-05-16 07:35:57.070417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.600 BaseBdev1 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:03.600 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.859 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:04.118 [ 00:23:04.118 { 00:23:04.118 "name": "BaseBdev1", 00:23:04.118 "aliases": [ 00:23:04.118 "ef1027be-1356-11ef-8e8f-9dd684e56d79" 00:23:04.118 ], 00:23:04.118 "product_name": "Malloc disk", 00:23:04.118 "block_size": 512, 00:23:04.118 "num_blocks": 65536, 00:23:04.118 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:04.118 "assigned_rate_limits": { 00:23:04.118 "rw_ios_per_sec": 0, 00:23:04.118 "rw_mbytes_per_sec": 0, 00:23:04.118 "r_mbytes_per_sec": 0, 00:23:04.118 "w_mbytes_per_sec": 0 00:23:04.118 }, 00:23:04.118 "claimed": true, 00:23:04.118 "claim_type": "exclusive_write", 00:23:04.118 "zoned": false, 00:23:04.118 "supported_io_types": { 00:23:04.118 "read": true, 00:23:04.118 "write": true, 00:23:04.118 "unmap": true, 00:23:04.118 "write_zeroes": true, 00:23:04.118 "flush": true, 00:23:04.118 "reset": true, 00:23:04.118 "compare": false, 00:23:04.118 "compare_and_write": false, 00:23:04.118 "abort": true, 00:23:04.118 "nvme_admin": false, 00:23:04.118 "nvme_io": false 00:23:04.118 }, 00:23:04.118 "memory_domains": [ 00:23:04.118 { 00:23:04.118 "dma_device_id": "system", 00:23:04.118 "dma_device_type": 1 00:23:04.118 }, 00:23:04.118 { 00:23:04.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.118 "dma_device_type": 2 00:23:04.118 } 00:23:04.118 ], 00:23:04.118 "driver_specific": {} 00:23:04.118 } 00:23:04.118 ] 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.118 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.376 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.376 "name": "Existed_Raid", 00:23:04.376 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:04.376 "strip_size_kb": 0, 00:23:04.376 "state": "configuring", 00:23:04.376 "raid_level": "raid1", 00:23:04.376 "superblock": true, 00:23:04.376 "num_base_bdevs": 3, 00:23:04.376 "num_base_bdevs_discovered": 2, 00:23:04.376 "num_base_bdevs_operational": 3, 00:23:04.376 "base_bdevs_list": [ 00:23:04.376 { 00:23:04.376 "name": "BaseBdev1", 00:23:04.376 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:04.376 "is_configured": true, 00:23:04.376 "data_offset": 2048, 00:23:04.376 "data_size": 63488 00:23:04.376 }, 00:23:04.376 { 00:23:04.376 "name": null, 00:23:04.376 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:04.376 "is_configured": false, 00:23:04.376 "data_offset": 2048, 00:23:04.376 "data_size": 63488 00:23:04.376 }, 00:23:04.376 { 00:23:04.376 "name": "BaseBdev3", 00:23:04.376 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:04.376 "is_configured": true, 00:23:04.376 "data_offset": 2048, 00:23:04.376 "data_size": 63488 00:23:04.376 } 00:23:04.376 ] 00:23:04.376 }' 00:23:04.376 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.376 07:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.634 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.634 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:04.893 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:04.893 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:05.196 [2024-05-16 07:35:58.566340] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.196 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.455 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.455 "name": "Existed_Raid", 00:23:05.455 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:05.455 "strip_size_kb": 0, 00:23:05.455 "state": "configuring", 00:23:05.455 "raid_level": "raid1", 00:23:05.455 "superblock": true, 00:23:05.455 "num_base_bdevs": 3, 00:23:05.455 "num_base_bdevs_discovered": 1, 00:23:05.455 "num_base_bdevs_operational": 3, 00:23:05.455 "base_bdevs_list": [ 00:23:05.455 { 00:23:05.455 "name": "BaseBdev1", 00:23:05.455 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:05.455 "is_configured": true, 00:23:05.455 "data_offset": 2048, 00:23:05.455 "data_size": 63488 00:23:05.455 }, 00:23:05.455 { 00:23:05.455 "name": null, 00:23:05.455 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:05.455 "is_configured": false, 00:23:05.455 "data_offset": 2048, 00:23:05.455 "data_size": 63488 00:23:05.455 }, 00:23:05.455 { 00:23:05.455 "name": null, 00:23:05.455 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:05.455 "is_configured": false, 00:23:05.455 "data_offset": 2048, 00:23:05.455 "data_size": 63488 00:23:05.455 } 00:23:05.455 ] 00:23:05.455 }' 00:23:05.455 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.455 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.714 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.714 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:05.972 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:23:05.972 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:06.229 [2024-05-16 07:35:59.630390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:06.229 07:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.229 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.488 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.488 "name": "Existed_Raid", 00:23:06.488 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:06.488 "strip_size_kb": 0, 00:23:06.488 "state": "configuring", 00:23:06.488 "raid_level": "raid1", 00:23:06.488 "superblock": true, 00:23:06.488 "num_base_bdevs": 3, 00:23:06.488 "num_base_bdevs_discovered": 2, 00:23:06.488 "num_base_bdevs_operational": 3, 00:23:06.488 "base_bdevs_list": [ 00:23:06.488 { 00:23:06.488 "name": "BaseBdev1", 00:23:06.488 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:06.488 "is_configured": true, 00:23:06.488 "data_offset": 2048, 00:23:06.488 "data_size": 63488 00:23:06.488 }, 00:23:06.488 { 00:23:06.488 "name": null, 00:23:06.488 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:06.488 "is_configured": false, 00:23:06.488 "data_offset": 2048, 00:23:06.488 "data_size": 63488 00:23:06.488 }, 00:23:06.488 { 00:23:06.488 "name": "BaseBdev3", 00:23:06.488 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:06.488 "is_configured": true, 00:23:06.488 "data_offset": 2048, 00:23:06.488 "data_size": 63488 00:23:06.488 } 00:23:06.488 ] 00:23:06.488 }' 00:23:06.488 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.488 07:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.747 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.747 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:07.006 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:23:07.006 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:07.006 [2024-05-16 07:36:00.550407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.264 07:36:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.264 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.522 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.522 "name": "Existed_Raid", 00:23:07.522 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:07.522 "strip_size_kb": 0, 00:23:07.522 "state": "configuring", 00:23:07.522 "raid_level": "raid1", 00:23:07.522 "superblock": true, 00:23:07.522 "num_base_bdevs": 3, 00:23:07.522 "num_base_bdevs_discovered": 1, 00:23:07.522 "num_base_bdevs_operational": 3, 00:23:07.522 "base_bdevs_list": [ 00:23:07.522 { 00:23:07.522 "name": null, 00:23:07.522 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:07.522 "is_configured": false, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 }, 00:23:07.522 { 00:23:07.522 "name": null, 00:23:07.522 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:07.522 "is_configured": false, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 }, 00:23:07.522 { 00:23:07.522 "name": "BaseBdev3", 00:23:07.522 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:07.522 "is_configured": true, 00:23:07.522 "data_offset": 2048, 00:23:07.522 "data_size": 63488 00:23:07.522 } 00:23:07.522 ] 00:23:07.522 }' 00:23:07.522 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.522 07:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.780 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.780 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:07.780 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:23:07.780 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:08.039 [2024-05-16 07:36:01.527094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.039 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.298 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.298 "name": "Existed_Raid", 00:23:08.298 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:08.298 "strip_size_kb": 0, 00:23:08.298 "state": "configuring", 00:23:08.298 "raid_level": "raid1", 00:23:08.298 "superblock": true, 00:23:08.298 "num_base_bdevs": 3, 00:23:08.298 "num_base_bdevs_discovered": 2, 00:23:08.298 "num_base_bdevs_operational": 3, 00:23:08.298 "base_bdevs_list": [ 00:23:08.298 { 00:23:08.298 "name": null, 00:23:08.298 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:08.298 "is_configured": false, 00:23:08.298 "data_offset": 2048, 00:23:08.298 "data_size": 63488 00:23:08.298 }, 00:23:08.298 { 00:23:08.298 "name": "BaseBdev2", 00:23:08.298 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:08.298 "is_configured": true, 00:23:08.298 "data_offset": 2048, 00:23:08.298 "data_size": 63488 00:23:08.298 }, 00:23:08.298 { 00:23:08.298 "name": "BaseBdev3", 00:23:08.298 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:08.298 "is_configured": true, 00:23:08.298 "data_offset": 2048, 00:23:08.298 "data_size": 63488 00:23:08.298 } 00:23:08.298 ] 00:23:08.298 }' 00:23:08.298 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.298 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.555 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.555 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:08.813 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:23:08.813 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.813 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:09.071 
07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ef1027be-1356-11ef-8e8f-9dd684e56d79 00:23:09.329 [2024-05-16 07:36:02.743200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:09.329 [2024-05-16 07:36:02.743242] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a52ff00 00:23:09.329 [2024-05-16 07:36:02.743245] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:09.329 [2024-05-16 07:36:02.743261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a592e20 00:23:09.329 [2024-05-16 07:36:02.743294] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a52ff00 00:23:09.329 [2024-05-16 07:36:02.743297] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a52ff00 00:23:09.329 [2024-05-16 07:36:02.743311] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.329 NewBaseBdev 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:09.329 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:09.587 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:09.845 [ 00:23:09.845 { 00:23:09.845 "name": "NewBaseBdev", 00:23:09.845 "aliases": [ 00:23:09.845 "ef1027be-1356-11ef-8e8f-9dd684e56d79" 00:23:09.845 ], 00:23:09.845 "product_name": "Malloc disk", 00:23:09.845 "block_size": 512, 00:23:09.845 "num_blocks": 65536, 00:23:09.845 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:09.845 "assigned_rate_limits": { 00:23:09.845 "rw_ios_per_sec": 0, 00:23:09.845 "rw_mbytes_per_sec": 0, 00:23:09.845 "r_mbytes_per_sec": 0, 00:23:09.845 "w_mbytes_per_sec": 0 00:23:09.845 }, 00:23:09.845 "claimed": true, 00:23:09.845 "claim_type": "exclusive_write", 00:23:09.845 "zoned": false, 00:23:09.845 "supported_io_types": { 00:23:09.845 "read": true, 00:23:09.845 "write": true, 00:23:09.845 "unmap": true, 00:23:09.845 "write_zeroes": true, 00:23:09.845 "flush": true, 00:23:09.845 "reset": true, 00:23:09.845 "compare": false, 00:23:09.845 "compare_and_write": false, 00:23:09.845 "abort": true, 00:23:09.845 "nvme_admin": false, 00:23:09.845 "nvme_io": false 00:23:09.845 }, 00:23:09.845 "memory_domains": [ 00:23:09.845 { 00:23:09.845 "dma_device_id": "system", 00:23:09.845 "dma_device_type": 1 00:23:09.845 }, 00:23:09.845 { 00:23:09.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.845 "dma_device_type": 2 00:23:09.845 } 00:23:09.845 ], 
00:23:09.845 "driver_specific": {} 00:23:09.845 } 00:23:09.845 ] 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.845 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.105 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.105 "name": "Existed_Raid", 00:23:10.105 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:10.105 "strip_size_kb": 0, 00:23:10.105 "state": "online", 00:23:10.105 "raid_level": "raid1", 00:23:10.105 "superblock": true, 00:23:10.105 "num_base_bdevs": 3, 00:23:10.105 "num_base_bdevs_discovered": 3, 00:23:10.105 "num_base_bdevs_operational": 3, 00:23:10.105 "base_bdevs_list": [ 00:23:10.105 { 00:23:10.105 "name": "NewBaseBdev", 00:23:10.105 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:10.105 "is_configured": true, 00:23:10.105 "data_offset": 2048, 00:23:10.105 "data_size": 63488 00:23:10.105 }, 00:23:10.105 { 00:23:10.105 "name": "BaseBdev2", 00:23:10.105 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:10.105 "is_configured": true, 00:23:10.105 "data_offset": 2048, 00:23:10.105 "data_size": 63488 00:23:10.105 }, 00:23:10.105 { 00:23:10.105 "name": "BaseBdev3", 00:23:10.105 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:10.105 "is_configured": true, 00:23:10.105 "data_offset": 2048, 00:23:10.105 "data_size": 63488 00:23:10.105 } 00:23:10.105 ] 00:23:10.105 }' 00:23:10.105 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.105 07:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_info 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:10.379 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:10.641 [2024-05-16 07:36:04.007139] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.641 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:10.641 "name": "Existed_Raid", 00:23:10.641 "aliases": [ 00:23:10.641 "ee23f1bc-1356-11ef-8e8f-9dd684e56d79" 00:23:10.641 ], 00:23:10.641 "product_name": "Raid Volume", 00:23:10.641 "block_size": 512, 00:23:10.641 "num_blocks": 63488, 00:23:10.641 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:10.641 "assigned_rate_limits": { 00:23:10.641 "rw_ios_per_sec": 0, 00:23:10.641 "rw_mbytes_per_sec": 0, 00:23:10.641 "r_mbytes_per_sec": 0, 00:23:10.641 "w_mbytes_per_sec": 0 00:23:10.641 }, 00:23:10.642 "claimed": false, 00:23:10.642 "zoned": false, 00:23:10.642 "supported_io_types": { 00:23:10.642 "read": true, 00:23:10.642 "write": true, 00:23:10.642 "unmap": false, 00:23:10.642 "write_zeroes": true, 00:23:10.642 "flush": false, 00:23:10.642 "reset": true, 00:23:10.642 "compare": false, 00:23:10.642 "compare_and_write": false, 00:23:10.642 "abort": false, 00:23:10.642 "nvme_admin": false, 00:23:10.642 "nvme_io": false 00:23:10.642 }, 00:23:10.642 "memory_domains": [ 00:23:10.642 { 00:23:10.642 "dma_device_id": "system", 00:23:10.642 "dma_device_type": 1 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.642 "dma_device_type": 2 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "dma_device_id": "system", 00:23:10.642 "dma_device_type": 1 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.642 "dma_device_type": 2 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "dma_device_id": "system", 00:23:10.642 "dma_device_type": 1 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.642 "dma_device_type": 2 00:23:10.642 } 00:23:10.642 ], 00:23:10.642 "driver_specific": { 00:23:10.642 "raid": { 00:23:10.642 "uuid": "ee23f1bc-1356-11ef-8e8f-9dd684e56d79", 00:23:10.642 "strip_size_kb": 0, 00:23:10.642 "state": "online", 00:23:10.642 "raid_level": "raid1", 00:23:10.642 "superblock": true, 00:23:10.642 "num_base_bdevs": 3, 00:23:10.642 "num_base_bdevs_discovered": 3, 00:23:10.642 "num_base_bdevs_operational": 3, 00:23:10.642 "base_bdevs_list": [ 00:23:10.642 { 00:23:10.642 "name": "NewBaseBdev", 00:23:10.642 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:10.642 "is_configured": true, 00:23:10.642 "data_offset": 2048, 00:23:10.642 "data_size": 63488 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "name": "BaseBdev2", 00:23:10.642 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:10.642 "is_configured": true, 00:23:10.642 "data_offset": 2048, 00:23:10.642 "data_size": 63488 00:23:10.642 }, 00:23:10.642 { 00:23:10.642 "name": "BaseBdev3", 00:23:10.642 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:10.642 "is_configured": true, 00:23:10.642 "data_offset": 2048, 00:23:10.642 "data_size": 63488 00:23:10.642 } 00:23:10.642 ] 
00:23:10.642 } 00:23:10.642 } 00:23:10.642 }' 00:23:10.642 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.642 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:23:10.642 BaseBdev2 00:23:10.642 BaseBdev3' 00:23:10.642 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:10.642 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:10.642 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:10.901 "name": "NewBaseBdev", 00:23:10.901 "aliases": [ 00:23:10.901 "ef1027be-1356-11ef-8e8f-9dd684e56d79" 00:23:10.901 ], 00:23:10.901 "product_name": "Malloc disk", 00:23:10.901 "block_size": 512, 00:23:10.901 "num_blocks": 65536, 00:23:10.901 "uuid": "ef1027be-1356-11ef-8e8f-9dd684e56d79", 00:23:10.901 "assigned_rate_limits": { 00:23:10.901 "rw_ios_per_sec": 0, 00:23:10.901 "rw_mbytes_per_sec": 0, 00:23:10.901 "r_mbytes_per_sec": 0, 00:23:10.901 "w_mbytes_per_sec": 0 00:23:10.901 }, 00:23:10.901 "claimed": true, 00:23:10.901 "claim_type": "exclusive_write", 00:23:10.901 "zoned": false, 00:23:10.901 "supported_io_types": { 00:23:10.901 "read": true, 00:23:10.901 "write": true, 00:23:10.901 "unmap": true, 00:23:10.901 "write_zeroes": true, 00:23:10.901 "flush": true, 00:23:10.901 "reset": true, 00:23:10.901 "compare": false, 00:23:10.901 "compare_and_write": false, 00:23:10.901 "abort": true, 00:23:10.901 "nvme_admin": false, 00:23:10.901 "nvme_io": false 00:23:10.901 }, 00:23:10.901 "memory_domains": [ 00:23:10.901 { 00:23:10.901 "dma_device_id": "system", 00:23:10.901 "dma_device_type": 1 00:23:10.901 }, 00:23:10.901 { 00:23:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.901 "dma_device_type": 2 00:23:10.901 } 00:23:10.901 ], 00:23:10.901 "driver_specific": {} 00:23:10.901 }' 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:10.901 07:36:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:10.901 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:11.160 "name": "BaseBdev2", 00:23:11.160 "aliases": [ 00:23:11.160 "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79" 00:23:11.160 ], 00:23:11.160 "product_name": "Malloc disk", 00:23:11.160 "block_size": 512, 00:23:11.160 "num_blocks": 65536, 00:23:11.160 "uuid": "ed5cf7d6-1356-11ef-8e8f-9dd684e56d79", 00:23:11.160 "assigned_rate_limits": { 00:23:11.160 "rw_ios_per_sec": 0, 00:23:11.160 "rw_mbytes_per_sec": 0, 00:23:11.160 "r_mbytes_per_sec": 0, 00:23:11.160 "w_mbytes_per_sec": 0 00:23:11.160 }, 00:23:11.160 "claimed": true, 00:23:11.160 "claim_type": "exclusive_write", 00:23:11.160 "zoned": false, 00:23:11.160 "supported_io_types": { 00:23:11.160 "read": true, 00:23:11.160 "write": true, 00:23:11.160 "unmap": true, 00:23:11.160 "write_zeroes": true, 00:23:11.160 "flush": true, 00:23:11.160 "reset": true, 00:23:11.160 "compare": false, 00:23:11.160 "compare_and_write": false, 00:23:11.160 "abort": true, 00:23:11.160 "nvme_admin": false, 00:23:11.160 "nvme_io": false 00:23:11.160 }, 00:23:11.160 "memory_domains": [ 00:23:11.160 { 00:23:11.160 "dma_device_id": "system", 00:23:11.160 "dma_device_type": 1 00:23:11.160 }, 00:23:11.160 { 00:23:11.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.160 "dma_device_type": 2 00:23:11.160 } 00:23:11.160 ], 00:23:11.160 "driver_specific": {} 00:23:11.160 }' 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:11.160 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:11.458 "name": "BaseBdev3", 00:23:11.458 "aliases": [ 00:23:11.458 "edcad505-1356-11ef-8e8f-9dd684e56d79" 00:23:11.458 ], 00:23:11.458 "product_name": "Malloc disk", 00:23:11.458 "block_size": 512, 00:23:11.458 "num_blocks": 65536, 00:23:11.458 "uuid": "edcad505-1356-11ef-8e8f-9dd684e56d79", 00:23:11.458 "assigned_rate_limits": { 00:23:11.458 "rw_ios_per_sec": 0, 00:23:11.458 "rw_mbytes_per_sec": 0, 00:23:11.458 "r_mbytes_per_sec": 0, 00:23:11.458 "w_mbytes_per_sec": 0 00:23:11.458 }, 00:23:11.458 "claimed": true, 00:23:11.458 "claim_type": "exclusive_write", 00:23:11.458 "zoned": false, 00:23:11.458 "supported_io_types": { 00:23:11.458 "read": true, 00:23:11.458 "write": true, 00:23:11.458 "unmap": true, 00:23:11.458 "write_zeroes": true, 00:23:11.458 "flush": true, 00:23:11.458 "reset": true, 00:23:11.458 "compare": false, 00:23:11.458 "compare_and_write": false, 00:23:11.458 "abort": true, 00:23:11.458 "nvme_admin": false, 00:23:11.458 "nvme_io": false 00:23:11.458 }, 00:23:11.458 "memory_domains": [ 00:23:11.458 { 00:23:11.458 "dma_device_id": "system", 00:23:11.458 "dma_device_type": 1 00:23:11.458 }, 00:23:11.458 { 00:23:11.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.458 "dma_device_type": 2 00:23:11.458 } 00:23:11.458 ], 00:23:11.458 "driver_specific": {} 00:23:11.458 }' 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:11.458 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:11.732 [2024-05-16 07:36:05.151133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:11.732 [2024-05-16 07:36:05.151155] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.732 [2024-05-16 07:36:05.151168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.732 [2024-05-16 07:36:05.151228] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.732 [2024-05-16 07:36:05.151232] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a52ff00 name Existed_Raid, state offline 00:23:11.732 07:36:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 56374 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 56374 ']' 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 56374 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 56374 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:23:11.732 killing process with pid 56374 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56374' 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 56374 00:23:11.732 [2024-05-16 07:36:05.179788] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:11.732 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 56374 00:23:11.732 [2024-05-16 07:36:05.194062] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:11.990 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:23:11.990 00:23:11.990 real 0m22.449s 00:23:11.990 user 0m41.003s 00:23:11.990 sys 0m3.153s 00:23:11.990 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:11.990 ************************************ 00:23:11.990 END TEST raid_state_function_test_sb 00:23:11.990 ************************************ 00:23:11.990 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.990 07:36:05 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:23:11.990 07:36:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:11.990 07:36:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:11.990 07:36:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:11.990 ************************************ 00:23:11.990 START TEST raid_superblock_test 00:23:11.990 ************************************ 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 
00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=57094 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 57094 /var/tmp/spdk-raid.sock 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 57094 ']' 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.991 07:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.991 [2024-05-16 07:36:05.413739] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:23:11.991 [2024-05-16 07:36:05.413951] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:12.556 EAL: TSC is not safe to use in SMP mode 00:23:12.556 EAL: TSC is not invariant 00:23:12.556 [2024-05-16 07:36:05.879952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.556 [2024-05-16 07:36:05.963798] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:23:12.556 [2024-05-16 07:36:05.965968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.556 [2024-05-16 07:36:05.966667] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:12.556 [2024-05-16 07:36:05.966682] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:13.123 malloc1 00:23:13.123 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:13.381 [2024-05-16 07:36:06.933278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:13.381 [2024-05-16 07:36:06.933330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.381 [2024-05-16 07:36:06.933889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7db780 00:23:13.381 [2024-05-16 07:36:06.933923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.381 [2024-05-16 07:36:06.934677] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.381 [2024-05-16 07:36:06.934710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:13.381 pt1 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.639 07:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.639 07:36:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:13.639 malloc2 00:23:13.639 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:13.896 [2024-05-16 07:36:07.365278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:13.896 [2024-05-16 07:36:07.365352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.896 [2024-05-16 07:36:07.365379] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dbc80 00:23:13.897 [2024-05-16 07:36:07.365387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.897 [2024-05-16 07:36:07.365892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.897 [2024-05-16 07:36:07.365916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:13.897 pt2 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.897 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:14.152 malloc3 00:23:14.152 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:14.416 [2024-05-16 07:36:07.805266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:14.416 [2024-05-16 07:36:07.805316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.416 [2024-05-16 07:36:07.805339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dc180 00:23:14.416 [2024-05-16 07:36:07.805345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.416 [2024-05-16 07:36:07.805790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.416 [2024-05-16 07:36:07.805819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:14.416 pt3 00:23:14.416 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:14.416 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:14.416 07:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:14.674 [2024-05-16 07:36:08.113286] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:14.674 [2024-05-16 07:36:08.113727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.674 [2024-05-16 07:36:08.113746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:14.674 [2024-05-16 07:36:08.113789] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7dc400 00:23:14.674 [2024-05-16 07:36:08.113794] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:14.674 [2024-05-16 07:36:08.113821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b83ee20 00:23:14.674 [2024-05-16 07:36:08.113876] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7dc400 00:23:14.674 [2024-05-16 07:36:08.113879] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7dc400 00:23:14.674 [2024-05-16 07:36:08.113899] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.674 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.934 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.934 "name": "raid_bdev1", 00:23:14.934 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:14.934 "strip_size_kb": 0, 00:23:14.934 "state": "online", 00:23:14.934 "raid_level": "raid1", 00:23:14.934 "superblock": true, 00:23:14.934 "num_base_bdevs": 3, 00:23:14.934 "num_base_bdevs_discovered": 3, 00:23:14.934 "num_base_bdevs_operational": 3, 00:23:14.934 "base_bdevs_list": [ 00:23:14.934 { 00:23:14.934 "name": "pt1", 00:23:14.934 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:14.934 "is_configured": true, 00:23:14.934 "data_offset": 2048, 00:23:14.934 "data_size": 63488 00:23:14.934 }, 00:23:14.934 { 00:23:14.934 "name": "pt2", 00:23:14.934 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:14.934 "is_configured": true, 00:23:14.934 "data_offset": 2048, 
00:23:14.934 "data_size": 63488 00:23:14.934 }, 00:23:14.934 { 00:23:14.934 "name": "pt3", 00:23:14.934 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:14.934 "is_configured": true, 00:23:14.934 "data_offset": 2048, 00:23:14.934 "data_size": 63488 00:23:14.934 } 00:23:14.934 ] 00:23:14.934 }' 00:23:14.934 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.934 07:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:15.192 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.450 [2024-05-16 07:36:08.921308] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.450 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:15.450 "name": "raid_bdev1", 00:23:15.450 "aliases": [ 00:23:15.450 "f5a52ccd-1356-11ef-8e8f-9dd684e56d79" 00:23:15.450 ], 00:23:15.450 "product_name": "Raid Volume", 00:23:15.450 "block_size": 512, 00:23:15.450 "num_blocks": 63488, 00:23:15.450 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:15.450 "assigned_rate_limits": { 00:23:15.450 "rw_ios_per_sec": 0, 00:23:15.450 "rw_mbytes_per_sec": 0, 00:23:15.450 "r_mbytes_per_sec": 0, 00:23:15.450 "w_mbytes_per_sec": 0 00:23:15.450 }, 00:23:15.450 "claimed": false, 00:23:15.450 "zoned": false, 00:23:15.451 "supported_io_types": { 00:23:15.451 "read": true, 00:23:15.451 "write": true, 00:23:15.451 "unmap": false, 00:23:15.451 "write_zeroes": true, 00:23:15.451 "flush": false, 00:23:15.451 "reset": true, 00:23:15.451 "compare": false, 00:23:15.451 "compare_and_write": false, 00:23:15.451 "abort": false, 00:23:15.451 "nvme_admin": false, 00:23:15.451 "nvme_io": false 00:23:15.451 }, 00:23:15.451 "memory_domains": [ 00:23:15.451 { 00:23:15.451 "dma_device_id": "system", 00:23:15.451 "dma_device_type": 1 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.451 "dma_device_type": 2 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "system", 00:23:15.451 "dma_device_type": 1 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.451 "dma_device_type": 2 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "system", 00:23:15.451 "dma_device_type": 1 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.451 "dma_device_type": 2 00:23:15.451 } 00:23:15.451 ], 00:23:15.451 "driver_specific": { 00:23:15.451 "raid": { 00:23:15.451 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:15.451 "strip_size_kb": 0, 00:23:15.451 "state": "online", 00:23:15.451 "raid_level": "raid1", 00:23:15.451 
"superblock": true, 00:23:15.451 "num_base_bdevs": 3, 00:23:15.451 "num_base_bdevs_discovered": 3, 00:23:15.451 "num_base_bdevs_operational": 3, 00:23:15.451 "base_bdevs_list": [ 00:23:15.451 { 00:23:15.451 "name": "pt1", 00:23:15.451 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:15.451 "is_configured": true, 00:23:15.451 "data_offset": 2048, 00:23:15.451 "data_size": 63488 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "name": "pt2", 00:23:15.451 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:15.451 "is_configured": true, 00:23:15.451 "data_offset": 2048, 00:23:15.451 "data_size": 63488 00:23:15.451 }, 00:23:15.451 { 00:23:15.451 "name": "pt3", 00:23:15.451 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:15.451 "is_configured": true, 00:23:15.451 "data_offset": 2048, 00:23:15.451 "data_size": 63488 00:23:15.451 } 00:23:15.451 ] 00:23:15.451 } 00:23:15.451 } 00:23:15.451 }' 00:23:15.451 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:15.451 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:15.451 pt2 00:23:15.451 pt3' 00:23:15.451 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:15.451 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:15.451 07:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:15.707 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:15.707 "name": "pt1", 00:23:15.707 "aliases": [ 00:23:15.707 "a5798faa-d2e2-2650-8188-2b578831c492" 00:23:15.707 ], 00:23:15.707 "product_name": "passthru", 00:23:15.707 "block_size": 512, 00:23:15.707 "num_blocks": 65536, 00:23:15.707 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:15.707 "assigned_rate_limits": { 00:23:15.707 "rw_ios_per_sec": 0, 00:23:15.707 "rw_mbytes_per_sec": 0, 00:23:15.707 "r_mbytes_per_sec": 0, 00:23:15.707 "w_mbytes_per_sec": 0 00:23:15.707 }, 00:23:15.707 "claimed": true, 00:23:15.707 "claim_type": "exclusive_write", 00:23:15.707 "zoned": false, 00:23:15.707 "supported_io_types": { 00:23:15.707 "read": true, 00:23:15.707 "write": true, 00:23:15.707 "unmap": true, 00:23:15.707 "write_zeroes": true, 00:23:15.707 "flush": true, 00:23:15.707 "reset": true, 00:23:15.707 "compare": false, 00:23:15.707 "compare_and_write": false, 00:23:15.707 "abort": true, 00:23:15.707 "nvme_admin": false, 00:23:15.707 "nvme_io": false 00:23:15.707 }, 00:23:15.707 "memory_domains": [ 00:23:15.707 { 00:23:15.707 "dma_device_id": "system", 00:23:15.707 "dma_device_type": 1 00:23:15.707 }, 00:23:15.707 { 00:23:15.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.707 "dma_device_type": 2 00:23:15.707 } 00:23:15.707 ], 00:23:15.707 "driver_specific": { 00:23:15.707 "passthru": { 00:23:15.707 "name": "pt1", 00:23:15.707 "base_bdev_name": "malloc1" 00:23:15.707 } 00:23:15.707 } 00:23:15.707 }' 00:23:15.707 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:15.707 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:15.707 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:15.707 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:15.707 07:36:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:15.966 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:16.224 "name": "pt2", 00:23:16.224 "aliases": [ 00:23:16.224 "1e11641b-3b23-285d-9a84-bbae38d1be75" 00:23:16.224 ], 00:23:16.224 "product_name": "passthru", 00:23:16.224 "block_size": 512, 00:23:16.224 "num_blocks": 65536, 00:23:16.224 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:16.224 "assigned_rate_limits": { 00:23:16.224 "rw_ios_per_sec": 0, 00:23:16.224 "rw_mbytes_per_sec": 0, 00:23:16.224 "r_mbytes_per_sec": 0, 00:23:16.224 "w_mbytes_per_sec": 0 00:23:16.224 }, 00:23:16.224 "claimed": true, 00:23:16.224 "claim_type": "exclusive_write", 00:23:16.224 "zoned": false, 00:23:16.224 "supported_io_types": { 00:23:16.224 "read": true, 00:23:16.224 "write": true, 00:23:16.224 "unmap": true, 00:23:16.224 "write_zeroes": true, 00:23:16.224 "flush": true, 00:23:16.224 "reset": true, 00:23:16.224 "compare": false, 00:23:16.224 "compare_and_write": false, 00:23:16.224 "abort": true, 00:23:16.224 "nvme_admin": false, 00:23:16.224 "nvme_io": false 00:23:16.224 }, 00:23:16.224 "memory_domains": [ 00:23:16.224 { 00:23:16.224 "dma_device_id": "system", 00:23:16.224 "dma_device_type": 1 00:23:16.224 }, 00:23:16.224 { 00:23:16.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.224 "dma_device_type": 2 00:23:16.224 } 00:23:16.224 ], 00:23:16.224 "driver_specific": { 00:23:16.224 "passthru": { 00:23:16.224 "name": "pt2", 00:23:16.224 "base_bdev_name": "malloc2" 00:23:16.224 } 00:23:16.224 } 00:23:16.224 }' 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:16.224 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:16.483 "name": "pt3", 00:23:16.483 "aliases": [ 00:23:16.483 "2db03f12-d8ec-c855-a417-ce687f6f8230" 00:23:16.483 ], 00:23:16.483 "product_name": "passthru", 00:23:16.483 "block_size": 512, 00:23:16.483 "num_blocks": 65536, 00:23:16.483 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:16.483 "assigned_rate_limits": { 00:23:16.483 "rw_ios_per_sec": 0, 00:23:16.483 "rw_mbytes_per_sec": 0, 00:23:16.483 "r_mbytes_per_sec": 0, 00:23:16.483 "w_mbytes_per_sec": 0 00:23:16.483 }, 00:23:16.483 "claimed": true, 00:23:16.483 "claim_type": "exclusive_write", 00:23:16.483 "zoned": false, 00:23:16.483 "supported_io_types": { 00:23:16.483 "read": true, 00:23:16.483 "write": true, 00:23:16.483 "unmap": true, 00:23:16.483 "write_zeroes": true, 00:23:16.483 "flush": true, 00:23:16.483 "reset": true, 00:23:16.483 "compare": false, 00:23:16.483 "compare_and_write": false, 00:23:16.483 "abort": true, 00:23:16.483 "nvme_admin": false, 00:23:16.483 "nvme_io": false 00:23:16.483 }, 00:23:16.483 "memory_domains": [ 00:23:16.483 { 00:23:16.483 "dma_device_id": "system", 00:23:16.483 "dma_device_type": 1 00:23:16.483 }, 00:23:16.483 { 00:23:16.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.483 "dma_device_type": 2 00:23:16.483 } 00:23:16.483 ], 00:23:16.483 "driver_specific": { 00:23:16.483 "passthru": { 00:23:16.483 "name": "pt3", 00:23:16.483 "base_bdev_name": "malloc3" 00:23:16.483 } 00:23:16.483 } 00:23:16.483 }' 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:16.483 07:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:16.740 [2024-05-16 07:36:10.161342] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.740 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f5a52ccd-1356-11ef-8e8f-9dd684e56d79 00:23:16.740 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f5a52ccd-1356-11ef-8e8f-9dd684e56d79 ']' 00:23:16.740 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.997 [2024-05-16 07:36:10.377283] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.997 [2024-05-16 07:36:10.377303] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.997 [2024-05-16 07:36:10.377318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.997 [2024-05-16 07:36:10.377333] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.997 [2024-05-16 07:36:10.377337] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7dc400 name raid_bdev1, state offline 00:23:16.997 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.997 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:17.255 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:17.255 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:17.255 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:17.255 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:17.514 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:17.514 07:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:17.772 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:17.772 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:18.030 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:18.030 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # 
local es=0 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:18.287 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:18.544 [2024-05-16 07:36:11.921302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:18.544 [2024-05-16 07:36:11.921771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:18.544 [2024-05-16 07:36:11.921784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:18.544 [2024-05-16 07:36:11.921797] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:18.544 [2024-05-16 07:36:11.921836] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:18.544 [2024-05-16 07:36:11.921845] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:18.544 [2024-05-16 07:36:11.921853] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.544 [2024-05-16 07:36:11.921858] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7dc180 name raid_bdev1, state configuring 00:23:18.544 request: 00:23:18.544 { 00:23:18.544 "name": "raid_bdev1", 00:23:18.544 "raid_level": "raid1", 00:23:18.544 "base_bdevs": [ 00:23:18.544 "malloc1", 00:23:18.544 "malloc2", 00:23:18.544 "malloc3" 00:23:18.544 ], 00:23:18.544 "superblock": false, 00:23:18.544 "method": "bdev_raid_create", 00:23:18.544 "req_id": 1 00:23:18.544 } 00:23:18.544 Got JSON-RPC error response 00:23:18.544 response: 00:23:18.544 { 00:23:18.544 "code": -17, 00:23:18.544 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:18.544 } 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.544 07:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:18.802 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:18.802 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:18.802 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:19.059 [2024-05-16 07:36:12.489308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:19.059 [2024-05-16 07:36:12.489358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.059 [2024-05-16 07:36:12.489382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dbc80 00:23:19.059 [2024-05-16 07:36:12.489389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.059 [2024-05-16 07:36:12.489881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.059 [2024-05-16 07:36:12.489909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:19.059 [2024-05-16 07:36:12.489928] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:19.059 [2024-05-16 07:36:12.489937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:19.059 pt1 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.059 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.316 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.316 "name": "raid_bdev1", 00:23:19.316 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:19.316 "strip_size_kb": 0, 00:23:19.316 "state": "configuring", 00:23:19.316 
"raid_level": "raid1", 00:23:19.316 "superblock": true, 00:23:19.316 "num_base_bdevs": 3, 00:23:19.316 "num_base_bdevs_discovered": 1, 00:23:19.316 "num_base_bdevs_operational": 3, 00:23:19.316 "base_bdevs_list": [ 00:23:19.316 { 00:23:19.316 "name": "pt1", 00:23:19.316 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:19.316 "is_configured": true, 00:23:19.316 "data_offset": 2048, 00:23:19.316 "data_size": 63488 00:23:19.316 }, 00:23:19.316 { 00:23:19.316 "name": null, 00:23:19.316 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:19.316 "is_configured": false, 00:23:19.316 "data_offset": 2048, 00:23:19.316 "data_size": 63488 00:23:19.316 }, 00:23:19.316 { 00:23:19.316 "name": null, 00:23:19.316 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:19.316 "is_configured": false, 00:23:19.316 "data_offset": 2048, 00:23:19.316 "data_size": 63488 00:23:19.316 } 00:23:19.316 ] 00:23:19.316 }' 00:23:19.316 07:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.316 07:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.574 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:23:19.574 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.832 [2024-05-16 07:36:13.249304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.832 [2024-05-16 07:36:13.249345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.832 [2024-05-16 07:36:13.249367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dc680 00:23:19.832 [2024-05-16 07:36:13.249374] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.832 [2024-05-16 07:36:13.249458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.832 [2024-05-16 07:36:13.249466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.832 [2024-05-16 07:36:13.249480] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:19.832 [2024-05-16 07:36:13.249486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.832 pt2 00:23:19.832 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:20.090 [2024-05-16 07:36:13.437315] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.090 07:36:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.090 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.346 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.346 "name": "raid_bdev1", 00:23:20.346 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:20.346 "strip_size_kb": 0, 00:23:20.346 "state": "configuring", 00:23:20.346 "raid_level": "raid1", 00:23:20.346 "superblock": true, 00:23:20.346 "num_base_bdevs": 3, 00:23:20.346 "num_base_bdevs_discovered": 1, 00:23:20.346 "num_base_bdevs_operational": 3, 00:23:20.346 "base_bdevs_list": [ 00:23:20.346 { 00:23:20.346 "name": "pt1", 00:23:20.346 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:20.346 "is_configured": true, 00:23:20.346 "data_offset": 2048, 00:23:20.346 "data_size": 63488 00:23:20.346 }, 00:23:20.346 { 00:23:20.346 "name": null, 00:23:20.346 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:20.346 "is_configured": false, 00:23:20.346 "data_offset": 2048, 00:23:20.346 "data_size": 63488 00:23:20.346 }, 00:23:20.346 { 00:23:20.346 "name": null, 00:23:20.346 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:20.346 "is_configured": false, 00:23:20.346 "data_offset": 2048, 00:23:20.346 "data_size": 63488 00:23:20.346 } 00:23:20.346 ] 00:23:20.346 }' 00:23:20.346 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.346 07:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.603 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:20.603 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:20.603 07:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:20.860 [2024-05-16 07:36:14.217334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:20.860 [2024-05-16 07:36:14.217398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.860 [2024-05-16 07:36:14.217422] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dc680 00:23:20.860 [2024-05-16 07:36:14.217430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.860 [2024-05-16 07:36:14.217516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.860 [2024-05-16 07:36:14.217524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:20.860 [2024-05-16 07:36:14.217542] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:20.860 [2024-05-16 07:36:14.217549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:20.860 pt2 00:23:20.860 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:20.860 07:36:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:20.860 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:20.860 [2024-05-16 07:36:14.405333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:20.860 [2024-05-16 07:36:14.405394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.860 [2024-05-16 07:36:14.405410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dc400 00:23:20.860 [2024-05-16 07:36:14.405417] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.860 [2024-05-16 07:36:14.405475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.860 [2024-05-16 07:36:14.405482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:20.860 [2024-05-16 07:36:14.405496] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:20.860 [2024-05-16 07:36:14.405501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:20.860 [2024-05-16 07:36:14.405519] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7db780 00:23:20.860 [2024-05-16 07:36:14.405523] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:20.860 [2024-05-16 07:36:14.405539] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b83ee20 00:23:20.860 [2024-05-16 07:36:14.405578] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7db780 00:23:20.860 [2024-05-16 07:36:14.405581] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7db780 00:23:20.860 [2024-05-16 07:36:14.405597] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.860 pt3 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.118 07:36:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.118 "name": "raid_bdev1", 00:23:21.118 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:21.118 "strip_size_kb": 0, 00:23:21.118 "state": "online", 00:23:21.118 "raid_level": "raid1", 00:23:21.118 "superblock": true, 00:23:21.118 "num_base_bdevs": 3, 00:23:21.118 "num_base_bdevs_discovered": 3, 00:23:21.118 "num_base_bdevs_operational": 3, 00:23:21.118 "base_bdevs_list": [ 00:23:21.118 { 00:23:21.118 "name": "pt1", 00:23:21.118 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:21.118 "is_configured": true, 00:23:21.118 "data_offset": 2048, 00:23:21.118 "data_size": 63488 00:23:21.118 }, 00:23:21.118 { 00:23:21.118 "name": "pt2", 00:23:21.118 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:21.118 "is_configured": true, 00:23:21.118 "data_offset": 2048, 00:23:21.118 "data_size": 63488 00:23:21.118 }, 00:23:21.118 { 00:23:21.118 "name": "pt3", 00:23:21.118 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:21.118 "is_configured": true, 00:23:21.118 "data_offset": 2048, 00:23:21.118 "data_size": 63488 00:23:21.118 } 00:23:21.118 ] 00:23:21.118 }' 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.118 07:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:21.376 07:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:21.634 [2024-05-16 07:36:15.133372] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:21.634 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:21.634 "name": "raid_bdev1", 00:23:21.634 "aliases": [ 00:23:21.634 "f5a52ccd-1356-11ef-8e8f-9dd684e56d79" 00:23:21.634 ], 00:23:21.634 "product_name": "Raid Volume", 00:23:21.634 "block_size": 512, 00:23:21.634 "num_blocks": 63488, 00:23:21.634 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:21.634 "assigned_rate_limits": { 00:23:21.634 "rw_ios_per_sec": 0, 00:23:21.634 "rw_mbytes_per_sec": 0, 00:23:21.634 "r_mbytes_per_sec": 0, 00:23:21.634 "w_mbytes_per_sec": 0 00:23:21.634 }, 00:23:21.634 "claimed": false, 00:23:21.634 "zoned": false, 00:23:21.634 "supported_io_types": { 00:23:21.634 "read": true, 00:23:21.634 "write": true, 00:23:21.634 "unmap": false, 00:23:21.634 "write_zeroes": true, 00:23:21.634 "flush": false, 00:23:21.634 "reset": true, 00:23:21.634 "compare": false, 00:23:21.634 "compare_and_write": false, 00:23:21.634 "abort": false, 00:23:21.634 "nvme_admin": false, 00:23:21.634 
"nvme_io": false 00:23:21.634 }, 00:23:21.634 "memory_domains": [ 00:23:21.634 { 00:23:21.634 "dma_device_id": "system", 00:23:21.634 "dma_device_type": 1 00:23:21.634 }, 00:23:21.634 { 00:23:21.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.634 "dma_device_type": 2 00:23:21.634 }, 00:23:21.634 { 00:23:21.634 "dma_device_id": "system", 00:23:21.634 "dma_device_type": 1 00:23:21.634 }, 00:23:21.634 { 00:23:21.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.634 "dma_device_type": 2 00:23:21.634 }, 00:23:21.634 { 00:23:21.634 "dma_device_id": "system", 00:23:21.634 "dma_device_type": 1 00:23:21.634 }, 00:23:21.634 { 00:23:21.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.634 "dma_device_type": 2 00:23:21.634 } 00:23:21.634 ], 00:23:21.634 "driver_specific": { 00:23:21.634 "raid": { 00:23:21.635 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:21.635 "strip_size_kb": 0, 00:23:21.635 "state": "online", 00:23:21.635 "raid_level": "raid1", 00:23:21.635 "superblock": true, 00:23:21.635 "num_base_bdevs": 3, 00:23:21.635 "num_base_bdevs_discovered": 3, 00:23:21.635 "num_base_bdevs_operational": 3, 00:23:21.635 "base_bdevs_list": [ 00:23:21.635 { 00:23:21.635 "name": "pt1", 00:23:21.635 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:21.635 "is_configured": true, 00:23:21.635 "data_offset": 2048, 00:23:21.635 "data_size": 63488 00:23:21.635 }, 00:23:21.635 { 00:23:21.635 "name": "pt2", 00:23:21.635 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:21.635 "is_configured": true, 00:23:21.635 "data_offset": 2048, 00:23:21.635 "data_size": 63488 00:23:21.635 }, 00:23:21.635 { 00:23:21.635 "name": "pt3", 00:23:21.635 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:21.635 "is_configured": true, 00:23:21.635 "data_offset": 2048, 00:23:21.635 "data_size": 63488 00:23:21.635 } 00:23:21.635 ] 00:23:21.635 } 00:23:21.635 } 00:23:21.635 }' 00:23:21.635 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:21.635 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:23:21.635 pt2 00:23:21.635 pt3' 00:23:21.635 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:21.635 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:21.635 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:21.892 "name": "pt1", 00:23:21.892 "aliases": [ 00:23:21.892 "a5798faa-d2e2-2650-8188-2b578831c492" 00:23:21.892 ], 00:23:21.892 "product_name": "passthru", 00:23:21.892 "block_size": 512, 00:23:21.892 "num_blocks": 65536, 00:23:21.892 "uuid": "a5798faa-d2e2-2650-8188-2b578831c492", 00:23:21.892 "assigned_rate_limits": { 00:23:21.892 "rw_ios_per_sec": 0, 00:23:21.892 "rw_mbytes_per_sec": 0, 00:23:21.892 "r_mbytes_per_sec": 0, 00:23:21.892 "w_mbytes_per_sec": 0 00:23:21.892 }, 00:23:21.892 "claimed": true, 00:23:21.892 "claim_type": "exclusive_write", 00:23:21.892 "zoned": false, 00:23:21.892 "supported_io_types": { 00:23:21.892 "read": true, 00:23:21.892 "write": true, 00:23:21.892 "unmap": true, 00:23:21.892 "write_zeroes": true, 00:23:21.892 "flush": true, 00:23:21.892 "reset": true, 00:23:21.892 "compare": false, 00:23:21.892 "compare_and_write": 
false, 00:23:21.892 "abort": true, 00:23:21.892 "nvme_admin": false, 00:23:21.892 "nvme_io": false 00:23:21.892 }, 00:23:21.892 "memory_domains": [ 00:23:21.892 { 00:23:21.892 "dma_device_id": "system", 00:23:21.892 "dma_device_type": 1 00:23:21.892 }, 00:23:21.892 { 00:23:21.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.892 "dma_device_type": 2 00:23:21.892 } 00:23:21.892 ], 00:23:21.892 "driver_specific": { 00:23:21.892 "passthru": { 00:23:21.892 "name": "pt1", 00:23:21.892 "base_bdev_name": "malloc1" 00:23:21.892 } 00:23:21.892 } 00:23:21.892 }' 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:21.892 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:22.150 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:22.409 "name": "pt2", 00:23:22.409 "aliases": [ 00:23:22.409 "1e11641b-3b23-285d-9a84-bbae38d1be75" 00:23:22.409 ], 00:23:22.409 "product_name": "passthru", 00:23:22.409 "block_size": 512, 00:23:22.409 "num_blocks": 65536, 00:23:22.409 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:22.409 "assigned_rate_limits": { 00:23:22.409 "rw_ios_per_sec": 0, 00:23:22.409 "rw_mbytes_per_sec": 0, 00:23:22.409 "r_mbytes_per_sec": 0, 00:23:22.409 "w_mbytes_per_sec": 0 00:23:22.409 }, 00:23:22.409 "claimed": true, 00:23:22.409 "claim_type": "exclusive_write", 00:23:22.409 "zoned": false, 00:23:22.409 "supported_io_types": { 00:23:22.409 "read": true, 00:23:22.409 "write": true, 00:23:22.409 "unmap": true, 00:23:22.409 "write_zeroes": true, 00:23:22.409 "flush": true, 00:23:22.409 "reset": true, 00:23:22.409 "compare": false, 00:23:22.409 "compare_and_write": false, 00:23:22.409 "abort": true, 00:23:22.409 "nvme_admin": false, 00:23:22.409 "nvme_io": false 00:23:22.409 }, 00:23:22.409 "memory_domains": [ 00:23:22.409 { 00:23:22.409 "dma_device_id": "system", 00:23:22.409 "dma_device_type": 1 00:23:22.409 }, 00:23:22.409 { 00:23:22.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.409 "dma_device_type": 2 00:23:22.409 } 00:23:22.409 ], 00:23:22.409 "driver_specific": { 00:23:22.409 "passthru": { 
00:23:22.409 "name": "pt2", 00:23:22.409 "base_bdev_name": "malloc2" 00:23:22.409 } 00:23:22.409 } 00:23:22.409 }' 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:22.409 07:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:22.667 "name": "pt3", 00:23:22.667 "aliases": [ 00:23:22.667 "2db03f12-d8ec-c855-a417-ce687f6f8230" 00:23:22.667 ], 00:23:22.667 "product_name": "passthru", 00:23:22.667 "block_size": 512, 00:23:22.667 "num_blocks": 65536, 00:23:22.667 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:22.667 "assigned_rate_limits": { 00:23:22.667 "rw_ios_per_sec": 0, 00:23:22.667 "rw_mbytes_per_sec": 0, 00:23:22.667 "r_mbytes_per_sec": 0, 00:23:22.667 "w_mbytes_per_sec": 0 00:23:22.667 }, 00:23:22.667 "claimed": true, 00:23:22.667 "claim_type": "exclusive_write", 00:23:22.667 "zoned": false, 00:23:22.667 "supported_io_types": { 00:23:22.667 "read": true, 00:23:22.667 "write": true, 00:23:22.667 "unmap": true, 00:23:22.667 "write_zeroes": true, 00:23:22.667 "flush": true, 00:23:22.667 "reset": true, 00:23:22.667 "compare": false, 00:23:22.667 "compare_and_write": false, 00:23:22.667 "abort": true, 00:23:22.667 "nvme_admin": false, 00:23:22.667 "nvme_io": false 00:23:22.667 }, 00:23:22.667 "memory_domains": [ 00:23:22.667 { 00:23:22.667 "dma_device_id": "system", 00:23:22.667 "dma_device_type": 1 00:23:22.667 }, 00:23:22.667 { 00:23:22.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.667 "dma_device_type": 2 00:23:22.667 } 00:23:22.667 ], 00:23:22.667 "driver_specific": { 00:23:22.667 "passthru": { 00:23:22.667 "name": "pt3", 00:23:22.667 "base_bdev_name": "malloc3" 00:23:22.667 } 00:23:22.667 } 00:23:22.667 }' 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:22.667 07:36:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:22.667 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:22.925 [2024-05-16 07:36:16.265371] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.925 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f5a52ccd-1356-11ef-8e8f-9dd684e56d79 '!=' f5a52ccd-1356-11ef-8e8f-9dd684e56d79 ']' 00:23:22.925 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:22.925 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:22.925 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:23:22.925 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:23.182 [2024-05-16 07:36:16.493351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.182 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.440 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.440 
"name": "raid_bdev1", 00:23:23.440 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:23.440 "strip_size_kb": 0, 00:23:23.440 "state": "online", 00:23:23.440 "raid_level": "raid1", 00:23:23.440 "superblock": true, 00:23:23.440 "num_base_bdevs": 3, 00:23:23.440 "num_base_bdevs_discovered": 2, 00:23:23.440 "num_base_bdevs_operational": 2, 00:23:23.440 "base_bdevs_list": [ 00:23:23.440 { 00:23:23.440 "name": null, 00:23:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.440 "is_configured": false, 00:23:23.440 "data_offset": 2048, 00:23:23.440 "data_size": 63488 00:23:23.440 }, 00:23:23.440 { 00:23:23.440 "name": "pt2", 00:23:23.440 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:23.440 "is_configured": true, 00:23:23.440 "data_offset": 2048, 00:23:23.440 "data_size": 63488 00:23:23.440 }, 00:23:23.440 { 00:23:23.440 "name": "pt3", 00:23:23.440 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:23.440 "is_configured": true, 00:23:23.440 "data_offset": 2048, 00:23:23.440 "data_size": 63488 00:23:23.440 } 00:23:23.440 ] 00:23:23.440 }' 00:23:23.440 07:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.440 07:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.697 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:23.954 [2024-05-16 07:36:17.269359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.954 [2024-05-16 07:36:17.269382] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.954 [2024-05-16 07:36:17.269401] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.954 [2024-05-16 07:36:17.269415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.954 [2024-05-16 07:36:17.269419] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7db780 name raid_bdev1, state offline 00:23:23.954 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.954 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:24.233 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:24.233 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:24.233 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:24.233 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:24.233 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:24.538 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:24.538 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:24.538 07:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:24.538 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:24.538 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:23:24.538 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:24.538 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:24.538 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:24.796 [2024-05-16 07:36:18.341444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:24.796 [2024-05-16 07:36:18.341498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.796 [2024-05-16 07:36:18.341523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dc400 00:23:24.796 [2024-05-16 07:36:18.341530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.796 [2024-05-16 07:36:18.342034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.796 [2024-05-16 07:36:18.342062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:24.796 [2024-05-16 07:36:18.342084] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:24.796 [2024-05-16 07:36:18.342093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:24.796 pt2 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.054 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.312 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.312 "name": "raid_bdev1", 00:23:25.312 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:25.312 "strip_size_kb": 0, 00:23:25.312 "state": "configuring", 00:23:25.312 "raid_level": "raid1", 00:23:25.312 "superblock": true, 00:23:25.312 "num_base_bdevs": 3, 00:23:25.312 "num_base_bdevs_discovered": 1, 00:23:25.312 "num_base_bdevs_operational": 2, 00:23:25.312 "base_bdevs_list": [ 00:23:25.312 { 00:23:25.312 "name": null, 00:23:25.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.312 "is_configured": false, 00:23:25.312 "data_offset": 2048, 00:23:25.312 
"data_size": 63488 00:23:25.312 }, 00:23:25.312 { 00:23:25.312 "name": "pt2", 00:23:25.312 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:25.312 "is_configured": true, 00:23:25.312 "data_offset": 2048, 00:23:25.312 "data_size": 63488 00:23:25.312 }, 00:23:25.312 { 00:23:25.312 "name": null, 00:23:25.313 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:25.313 "is_configured": false, 00:23:25.313 "data_offset": 2048, 00:23:25.313 "data_size": 63488 00:23:25.313 } 00:23:25.313 ] 00:23:25.313 }' 00:23:25.313 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.313 07:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.570 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:25.570 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:25.570 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:23:25.570 07:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:25.828 [2024-05-16 07:36:19.213494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:25.828 [2024-05-16 07:36:19.213540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.828 [2024-05-16 07:36:19.213563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7db780 00:23:25.828 [2024-05-16 07:36:19.213570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.828 [2024-05-16 07:36:19.213645] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.828 [2024-05-16 07:36:19.213667] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:25.828 [2024-05-16 07:36:19.213683] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:25.828 [2024-05-16 07:36:19.213689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:25.828 [2024-05-16 07:36:19.213708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7dc180 00:23:25.828 [2024-05-16 07:36:19.213711] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:25.828 [2024-05-16 07:36:19.213728] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b83ee20 00:23:25.828 [2024-05-16 07:36:19.213758] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7dc180 00:23:25.828 [2024-05-16 07:36:19.213761] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7dc180 00:23:25.828 [2024-05-16 07:36:19.213777] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.828 pt3 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.828 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.087 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.087 "name": "raid_bdev1", 00:23:26.087 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:26.087 "strip_size_kb": 0, 00:23:26.087 "state": "online", 00:23:26.087 "raid_level": "raid1", 00:23:26.087 "superblock": true, 00:23:26.087 "num_base_bdevs": 3, 00:23:26.087 "num_base_bdevs_discovered": 2, 00:23:26.087 "num_base_bdevs_operational": 2, 00:23:26.087 "base_bdevs_list": [ 00:23:26.087 { 00:23:26.087 "name": null, 00:23:26.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.087 "is_configured": false, 00:23:26.087 "data_offset": 2048, 00:23:26.087 "data_size": 63488 00:23:26.087 }, 00:23:26.087 { 00:23:26.087 "name": "pt2", 00:23:26.087 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:26.087 "is_configured": true, 00:23:26.087 "data_offset": 2048, 00:23:26.087 "data_size": 63488 00:23:26.087 }, 00:23:26.087 { 00:23:26.087 "name": "pt3", 00:23:26.087 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:26.087 "is_configured": true, 00:23:26.087 "data_offset": 2048, 00:23:26.087 "data_size": 63488 00:23:26.087 } 00:23:26.087 ] 00:23:26.087 }' 00:23:26.087 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.087 07:36:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.345 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:26.604 [2024-05-16 07:36:19.969539] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.604 [2024-05-16 07:36:19.969561] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.604 [2024-05-16 07:36:19.969594] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.604 [2024-05-16 07:36:19.969605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.604 [2024-05-16 07:36:19.969609] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7dc180 name raid_bdev1, state offline 00:23:26.604 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.604 07:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:26.861 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:26.861 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 
00:23:26.861 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:23:26.862 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:23:26.862 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:27.119 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.377 [2024-05-16 07:36:20.725582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.377 [2024-05-16 07:36:20.725630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.377 [2024-05-16 07:36:20.725670] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7db780 00:23:27.377 [2024-05-16 07:36:20.725678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.377 [2024-05-16 07:36:20.726170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.377 [2024-05-16 07:36:20.726198] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:27.377 [2024-05-16 07:36:20.726218] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:27.377 [2024-05-16 07:36:20.726227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:27.377 [2024-05-16 07:36:20.726250] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:27.377 [2024-05-16 07:36:20.726254] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.377 [2024-05-16 07:36:20.726258] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7dc180 name raid_bdev1, state configuring 00:23:27.377 [2024-05-16 07:36:20.726265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:27.377 pt1 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:27.377 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.636 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.636 "name": "raid_bdev1", 00:23:27.636 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:27.636 "strip_size_kb": 0, 00:23:27.636 "state": "configuring", 00:23:27.636 "raid_level": "raid1", 00:23:27.636 "superblock": true, 00:23:27.636 "num_base_bdevs": 3, 00:23:27.636 "num_base_bdevs_discovered": 1, 00:23:27.636 "num_base_bdevs_operational": 2, 00:23:27.636 "base_bdevs_list": [ 00:23:27.636 { 00:23:27.636 "name": null, 00:23:27.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.636 "is_configured": false, 00:23:27.636 "data_offset": 2048, 00:23:27.636 "data_size": 63488 00:23:27.636 }, 00:23:27.636 { 00:23:27.636 "name": "pt2", 00:23:27.636 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:27.636 "is_configured": true, 00:23:27.636 "data_offset": 2048, 00:23:27.636 "data_size": 63488 00:23:27.636 }, 00:23:27.636 { 00:23:27.636 "name": null, 00:23:27.636 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:27.636 "is_configured": false, 00:23:27.636 "data_offset": 2048, 00:23:27.636 "data_size": 63488 00:23:27.636 } 00:23:27.636 ] 00:23:27.636 }' 00:23:27.636 07:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.636 07:36:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.894 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:23:27.894 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:28.151 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:28.151 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.410 [2024-05-16 07:36:21.861656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.410 [2024-05-16 07:36:21.861705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.410 [2024-05-16 07:36:21.861731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b7dbc80 00:23:28.410 [2024-05-16 07:36:21.861739] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.410 [2024-05-16 07:36:21.861826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.410 [2024-05-16 07:36:21.861835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.410 [2024-05-16 07:36:21.861851] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:28.410 [2024-05-16 07:36:21.861858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:28.410 [2024-05-16 07:36:21.861877] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b7dc180 00:23:28.410 [2024-05-16 07:36:21.861881] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:28.410 [2024-05-16 07:36:21.861900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b83ee20 00:23:28.410 [2024-05-16 07:36:21.861932] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b7dc180 00:23:28.410 [2024-05-16 07:36:21.861935] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b7dc180 00:23:28.410 [2024-05-16 07:36:21.861952] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.410 pt3 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.410 07:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.668 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.668 "name": "raid_bdev1", 00:23:28.668 "uuid": "f5a52ccd-1356-11ef-8e8f-9dd684e56d79", 00:23:28.668 "strip_size_kb": 0, 00:23:28.668 "state": "online", 00:23:28.668 "raid_level": "raid1", 00:23:28.668 "superblock": true, 00:23:28.668 "num_base_bdevs": 3, 00:23:28.668 "num_base_bdevs_discovered": 2, 00:23:28.668 "num_base_bdevs_operational": 2, 00:23:28.668 "base_bdevs_list": [ 00:23:28.668 { 00:23:28.668 "name": null, 00:23:28.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.668 "is_configured": false, 00:23:28.668 "data_offset": 2048, 00:23:28.668 "data_size": 63488 00:23:28.668 }, 00:23:28.668 { 00:23:28.668 "name": "pt2", 00:23:28.668 "uuid": "1e11641b-3b23-285d-9a84-bbae38d1be75", 00:23:28.668 "is_configured": true, 00:23:28.668 "data_offset": 2048, 00:23:28.668 "data_size": 63488 00:23:28.668 }, 00:23:28.668 { 00:23:28.668 "name": "pt3", 00:23:28.668 "uuid": "2db03f12-d8ec-c855-a417-ce687f6f8230", 00:23:28.668 "is_configured": true, 00:23:28.668 "data_offset": 2048, 00:23:28.668 "data_size": 63488 00:23:28.668 } 00:23:28.668 ] 00:23:28.668 }' 00:23:28.668 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.668 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.926 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:28.926 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:29.183 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- 
# [[ false == \f\a\l\s\e ]] 00:23:29.183 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.183 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:29.441 [2024-05-16 07:36:22.937755] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f5a52ccd-1356-11ef-8e8f-9dd684e56d79 '!=' f5a52ccd-1356-11ef-8e8f-9dd684e56d79 ']' 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 57094 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 57094 ']' 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 57094 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 57094 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:23:29.441 killing process with pid 57094 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57094' 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 57094 00:23:29.441 [2024-05-16 07:36:22.970121] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:29.441 [2024-05-16 07:36:22.970138] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.441 [2024-05-16 07:36:22.970150] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.441 [2024-05-16 07:36:22.970154] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b7dc180 name raid_bdev1, state offline 00:23:29.441 07:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 57094 00:23:29.441 [2024-05-16 07:36:22.984606] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:29.699 07:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:29.699 00:23:29.699 real 0m17.752s 00:23:29.699 user 0m32.201s 00:23:29.699 sys 0m2.591s 00:23:29.699 07:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:29.699 07:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 ************************************ 00:23:29.699 END TEST raid_superblock_test 00:23:29.699 ************************************ 00:23:29.699 07:36:23 bdev_raid -- bdev/bdev_raid.sh@801 -- # for n in {2..4} 00:23:29.699 07:36:23 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:23:29.700 07:36:23 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:23:29.700 07:36:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:29.700 07:36:23 bdev_raid -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:23:29.700 07:36:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:29.700 ************************************ 00:23:29.700 START TEST raid_state_function_test 00:23:29.700 ************************************ 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # 
superblock_create_arg= 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=57642 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 57642' 00:23:29.700 Process raid pid: 57642 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 57642 /var/tmp/spdk-raid.sock 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 57642 ']' 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:29.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:29.700 07:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.700 [2024-05-16 07:36:23.216945] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:23:29.700 [2024-05-16 07:36:23.217188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:30.267 EAL: TSC is not safe to use in SMP mode 00:23:30.267 EAL: TSC is not invariant 00:23:30.267 [2024-05-16 07:36:23.737115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.524 [2024-05-16 07:36:23.834076] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:23:30.524 [2024-05-16 07:36:23.836651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.524 [2024-05-16 07:36:23.837537] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.524 [2024-05-16 07:36:23.837552] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:30.781 07:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:30.781 07:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:23:30.781 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:31.039 [2024-05-16 07:36:24.485518] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:31.039 [2024-05-16 07:36:24.485566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:31.039 [2024-05-16 07:36:24.485570] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.039 [2024-05-16 07:36:24.485577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.039 [2024-05-16 07:36:24.485580] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.039 [2024-05-16 07:36:24.485586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.039 [2024-05-16 07:36:24.485588] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:31.039 [2024-05-16 07:36:24.485594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.039 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.297 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.297 "name": "Existed_Raid", 00:23:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.297 
"strip_size_kb": 64, 00:23:31.297 "state": "configuring", 00:23:31.297 "raid_level": "raid0", 00:23:31.297 "superblock": false, 00:23:31.297 "num_base_bdevs": 4, 00:23:31.297 "num_base_bdevs_discovered": 0, 00:23:31.297 "num_base_bdevs_operational": 4, 00:23:31.297 "base_bdevs_list": [ 00:23:31.297 { 00:23:31.297 "name": "BaseBdev1", 00:23:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.297 "is_configured": false, 00:23:31.297 "data_offset": 0, 00:23:31.297 "data_size": 0 00:23:31.297 }, 00:23:31.297 { 00:23:31.297 "name": "BaseBdev2", 00:23:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.297 "is_configured": false, 00:23:31.297 "data_offset": 0, 00:23:31.297 "data_size": 0 00:23:31.297 }, 00:23:31.297 { 00:23:31.297 "name": "BaseBdev3", 00:23:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.297 "is_configured": false, 00:23:31.297 "data_offset": 0, 00:23:31.297 "data_size": 0 00:23:31.297 }, 00:23:31.297 { 00:23:31.297 "name": "BaseBdev4", 00:23:31.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.297 "is_configured": false, 00:23:31.297 "data_offset": 0, 00:23:31.297 "data_size": 0 00:23:31.297 } 00:23:31.297 ] 00:23:31.297 }' 00:23:31.297 07:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.297 07:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.598 07:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:31.855 [2024-05-16 07:36:25.385528] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.855 [2024-05-16 07:36:25.385552] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5fc500 name Existed_Raid, state configuring 00:23:32.113 07:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:32.113 [2024-05-16 07:36:25.653567] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:32.113 [2024-05-16 07:36:25.653619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:32.113 [2024-05-16 07:36:25.653624] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:32.113 [2024-05-16 07:36:25.653632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:32.113 [2024-05-16 07:36:25.653635] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:32.113 [2024-05-16 07:36:25.653642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:32.113 [2024-05-16 07:36:25.653645] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:32.113 [2024-05-16 07:36:25.653651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:32.113 07:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:32.371 [2024-05-16 07:36:25.882523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:32.371 BaseBdev1 00:23:32.371 07:36:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:32.371 07:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.628 07:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:32.886 [ 00:23:32.886 { 00:23:32.886 "name": "BaseBdev1", 00:23:32.886 "aliases": [ 00:23:32.886 "003c6794-1357-11ef-8e8f-9dd684e56d79" 00:23:32.886 ], 00:23:32.886 "product_name": "Malloc disk", 00:23:32.886 "block_size": 512, 00:23:32.886 "num_blocks": 65536, 00:23:32.886 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:32.886 "assigned_rate_limits": { 00:23:32.886 "rw_ios_per_sec": 0, 00:23:32.886 "rw_mbytes_per_sec": 0, 00:23:32.886 "r_mbytes_per_sec": 0, 00:23:32.886 "w_mbytes_per_sec": 0 00:23:32.886 }, 00:23:32.886 "claimed": true, 00:23:32.886 "claim_type": "exclusive_write", 00:23:32.886 "zoned": false, 00:23:32.886 "supported_io_types": { 00:23:32.886 "read": true, 00:23:32.886 "write": true, 00:23:32.886 "unmap": true, 00:23:32.886 "write_zeroes": true, 00:23:32.886 "flush": true, 00:23:32.886 "reset": true, 00:23:32.886 "compare": false, 00:23:32.886 "compare_and_write": false, 00:23:32.886 "abort": true, 00:23:32.886 "nvme_admin": false, 00:23:32.886 "nvme_io": false 00:23:32.886 }, 00:23:32.886 "memory_domains": [ 00:23:32.886 { 00:23:32.886 "dma_device_id": "system", 00:23:32.886 "dma_device_type": 1 00:23:32.886 }, 00:23:32.886 { 00:23:32.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.886 "dma_device_type": 2 00:23:32.886 } 00:23:32.886 ], 00:23:32.886 "driver_specific": {} 00:23:32.886 } 00:23:32.886 ] 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.886 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.145 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.145 "name": "Existed_Raid", 00:23:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.145 "strip_size_kb": 64, 00:23:33.145 "state": "configuring", 00:23:33.145 "raid_level": "raid0", 00:23:33.145 "superblock": false, 00:23:33.145 "num_base_bdevs": 4, 00:23:33.145 "num_base_bdevs_discovered": 1, 00:23:33.145 "num_base_bdevs_operational": 4, 00:23:33.145 "base_bdevs_list": [ 00:23:33.145 { 00:23:33.145 "name": "BaseBdev1", 00:23:33.145 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:33.145 "is_configured": true, 00:23:33.145 "data_offset": 0, 00:23:33.145 "data_size": 65536 00:23:33.145 }, 00:23:33.145 { 00:23:33.145 "name": "BaseBdev2", 00:23:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.145 "is_configured": false, 00:23:33.145 "data_offset": 0, 00:23:33.145 "data_size": 0 00:23:33.145 }, 00:23:33.145 { 00:23:33.145 "name": "BaseBdev3", 00:23:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.146 "is_configured": false, 00:23:33.146 "data_offset": 0, 00:23:33.146 "data_size": 0 00:23:33.146 }, 00:23:33.146 { 00:23:33.146 "name": "BaseBdev4", 00:23:33.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.146 "is_configured": false, 00:23:33.146 "data_offset": 0, 00:23:33.146 "data_size": 0 00:23:33.146 } 00:23:33.146 ] 00:23:33.146 }' 00:23:33.146 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.146 07:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.403 07:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:33.662 [2024-05-16 07:36:27.129662] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:33.662 [2024-05-16 07:36:27.129706] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5fc500 name Existed_Raid, state configuring 00:23:33.662 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:33.920 [2024-05-16 07:36:27.389690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:33.920 [2024-05-16 07:36:27.390355] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:33.920 [2024-05-16 07:36:27.390397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:33.920 [2024-05-16 07:36:27.390402] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:33.920 [2024-05-16 07:36:27.390409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:33.920 [2024-05-16 07:36:27.390413] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:23:33.920 [2024-05-16 07:36:27.390419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.920 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.178 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.178 "name": "Existed_Raid", 00:23:34.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.178 "strip_size_kb": 64, 00:23:34.178 "state": "configuring", 00:23:34.178 "raid_level": "raid0", 00:23:34.178 "superblock": false, 00:23:34.178 "num_base_bdevs": 4, 00:23:34.178 "num_base_bdevs_discovered": 1, 00:23:34.178 "num_base_bdevs_operational": 4, 00:23:34.178 "base_bdevs_list": [ 00:23:34.178 { 00:23:34.178 "name": "BaseBdev1", 00:23:34.179 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:34.179 "is_configured": true, 00:23:34.179 "data_offset": 0, 00:23:34.179 "data_size": 65536 00:23:34.179 }, 00:23:34.179 { 00:23:34.179 "name": "BaseBdev2", 00:23:34.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.179 "is_configured": false, 00:23:34.179 "data_offset": 0, 00:23:34.179 "data_size": 0 00:23:34.179 }, 00:23:34.179 { 00:23:34.179 "name": "BaseBdev3", 00:23:34.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.179 "is_configured": false, 00:23:34.179 "data_offset": 0, 00:23:34.179 "data_size": 0 00:23:34.179 }, 00:23:34.179 { 00:23:34.179 "name": "BaseBdev4", 00:23:34.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.179 "is_configured": false, 00:23:34.179 "data_offset": 0, 00:23:34.179 "data_size": 0 00:23:34.179 } 00:23:34.179 ] 00:23:34.179 }' 00:23:34.179 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.179 07:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.436 07:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.694 [2024-05-16 07:36:28.181812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.694 BaseBdev2 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:34.694 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.953 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:35.211 [ 00:23:35.211 { 00:23:35.211 "name": "BaseBdev2", 00:23:35.211 "aliases": [ 00:23:35.211 "019b5fd5-1357-11ef-8e8f-9dd684e56d79" 00:23:35.211 ], 00:23:35.211 "product_name": "Malloc disk", 00:23:35.211 "block_size": 512, 00:23:35.211 "num_blocks": 65536, 00:23:35.211 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:35.211 "assigned_rate_limits": { 00:23:35.211 "rw_ios_per_sec": 0, 00:23:35.211 "rw_mbytes_per_sec": 0, 00:23:35.211 "r_mbytes_per_sec": 0, 00:23:35.211 "w_mbytes_per_sec": 0 00:23:35.211 }, 00:23:35.211 "claimed": true, 00:23:35.211 "claim_type": "exclusive_write", 00:23:35.211 "zoned": false, 00:23:35.211 "supported_io_types": { 00:23:35.211 "read": true, 00:23:35.211 "write": true, 00:23:35.211 "unmap": true, 00:23:35.211 "write_zeroes": true, 00:23:35.211 "flush": true, 00:23:35.211 "reset": true, 00:23:35.211 "compare": false, 00:23:35.211 "compare_and_write": false, 00:23:35.211 "abort": true, 00:23:35.211 "nvme_admin": false, 00:23:35.211 "nvme_io": false 00:23:35.211 }, 00:23:35.211 "memory_domains": [ 00:23:35.211 { 00:23:35.211 "dma_device_id": "system", 00:23:35.211 "dma_device_type": 1 00:23:35.211 }, 00:23:35.211 { 00:23:35.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.211 "dma_device_type": 2 00:23:35.211 } 00:23:35.211 ], 00:23:35.211 "driver_specific": {} 00:23:35.211 } 00:23:35.211 ] 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.211 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.505 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.505 "name": "Existed_Raid", 00:23:35.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.505 "strip_size_kb": 64, 00:23:35.505 "state": "configuring", 00:23:35.505 "raid_level": "raid0", 00:23:35.505 "superblock": false, 00:23:35.505 "num_base_bdevs": 4, 00:23:35.505 "num_base_bdevs_discovered": 2, 00:23:35.505 "num_base_bdevs_operational": 4, 00:23:35.505 "base_bdevs_list": [ 00:23:35.505 { 00:23:35.505 "name": "BaseBdev1", 00:23:35.505 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:35.505 "is_configured": true, 00:23:35.505 "data_offset": 0, 00:23:35.505 "data_size": 65536 00:23:35.505 }, 00:23:35.505 { 00:23:35.505 "name": "BaseBdev2", 00:23:35.505 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:35.505 "is_configured": true, 00:23:35.505 "data_offset": 0, 00:23:35.505 "data_size": 65536 00:23:35.505 }, 00:23:35.505 { 00:23:35.505 "name": "BaseBdev3", 00:23:35.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.505 "is_configured": false, 00:23:35.505 "data_offset": 0, 00:23:35.505 "data_size": 0 00:23:35.505 }, 00:23:35.505 { 00:23:35.505 "name": "BaseBdev4", 00:23:35.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.505 "is_configured": false, 00:23:35.505 "data_offset": 0, 00:23:35.505 "data_size": 0 00:23:35.505 } 00:23:35.505 ] 00:23:35.505 }' 00:23:35.505 07:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.505 07:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.763 07:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:36.020 [2024-05-16 07:36:29.533873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.020 BaseBdev3 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # 
bdev_timeout=2000 00:23:36.020 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.585 07:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:36.842 [ 00:23:36.842 { 00:23:36.842 "name": "BaseBdev3", 00:23:36.842 "aliases": [ 00:23:36.842 "0269aee4-1357-11ef-8e8f-9dd684e56d79" 00:23:36.842 ], 00:23:36.842 "product_name": "Malloc disk", 00:23:36.842 "block_size": 512, 00:23:36.842 "num_blocks": 65536, 00:23:36.842 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:36.842 "assigned_rate_limits": { 00:23:36.842 "rw_ios_per_sec": 0, 00:23:36.842 "rw_mbytes_per_sec": 0, 00:23:36.842 "r_mbytes_per_sec": 0, 00:23:36.842 "w_mbytes_per_sec": 0 00:23:36.842 }, 00:23:36.842 "claimed": true, 00:23:36.842 "claim_type": "exclusive_write", 00:23:36.842 "zoned": false, 00:23:36.842 "supported_io_types": { 00:23:36.842 "read": true, 00:23:36.842 "write": true, 00:23:36.842 "unmap": true, 00:23:36.842 "write_zeroes": true, 00:23:36.842 "flush": true, 00:23:36.842 "reset": true, 00:23:36.842 "compare": false, 00:23:36.842 "compare_and_write": false, 00:23:36.842 "abort": true, 00:23:36.842 "nvme_admin": false, 00:23:36.842 "nvme_io": false 00:23:36.842 }, 00:23:36.842 "memory_domains": [ 00:23:36.842 { 00:23:36.842 "dma_device_id": "system", 00:23:36.842 "dma_device_type": 1 00:23:36.842 }, 00:23:36.842 { 00:23:36.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.842 "dma_device_type": 2 00:23:36.843 } 00:23:36.843 ], 00:23:36.843 "driver_specific": {} 00:23:36.843 } 00:23:36.843 ] 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.843 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:37.100 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.100 "name": "Existed_Raid", 00:23:37.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.100 "strip_size_kb": 64, 00:23:37.100 "state": "configuring", 00:23:37.100 "raid_level": "raid0", 00:23:37.100 "superblock": false, 00:23:37.100 "num_base_bdevs": 4, 00:23:37.100 "num_base_bdevs_discovered": 3, 00:23:37.100 "num_base_bdevs_operational": 4, 00:23:37.100 "base_bdevs_list": [ 00:23:37.100 { 00:23:37.100 "name": "BaseBdev1", 00:23:37.100 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:37.100 "is_configured": true, 00:23:37.100 "data_offset": 0, 00:23:37.100 "data_size": 65536 00:23:37.100 }, 00:23:37.100 { 00:23:37.100 "name": "BaseBdev2", 00:23:37.100 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:37.100 "is_configured": true, 00:23:37.100 "data_offset": 0, 00:23:37.100 "data_size": 65536 00:23:37.100 }, 00:23:37.100 { 00:23:37.100 "name": "BaseBdev3", 00:23:37.100 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:37.100 "is_configured": true, 00:23:37.100 "data_offset": 0, 00:23:37.100 "data_size": 65536 00:23:37.100 }, 00:23:37.100 { 00:23:37.100 "name": "BaseBdev4", 00:23:37.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.100 "is_configured": false, 00:23:37.100 "data_offset": 0, 00:23:37.100 "data_size": 0 00:23:37.100 } 00:23:37.100 ] 00:23:37.100 }' 00:23:37.100 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.100 07:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.357 07:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:37.614 [2024-05-16 07:36:31.057907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:37.614 [2024-05-16 07:36:31.057931] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5fca00 00:23:37.614 [2024-05-16 07:36:31.057935] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:37.614 [2024-05-16 07:36:31.057960] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b65fec0 00:23:37.614 [2024-05-16 07:36:31.058037] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5fca00 00:23:37.614 [2024-05-16 07:36:31.058040] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5fca00 00:23:37.614 [2024-05-16 07:36:31.058065] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.614 BaseBdev4 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:37.614 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:37.871 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:38.127 [ 00:23:38.127 { 00:23:38.127 "name": "BaseBdev4", 00:23:38.127 "aliases": [ 00:23:38.127 "03523c0f-1357-11ef-8e8f-9dd684e56d79" 00:23:38.127 ], 00:23:38.127 "product_name": "Malloc disk", 00:23:38.127 "block_size": 512, 00:23:38.127 "num_blocks": 65536, 00:23:38.127 "uuid": "03523c0f-1357-11ef-8e8f-9dd684e56d79", 00:23:38.127 "assigned_rate_limits": { 00:23:38.127 "rw_ios_per_sec": 0, 00:23:38.127 "rw_mbytes_per_sec": 0, 00:23:38.127 "r_mbytes_per_sec": 0, 00:23:38.127 "w_mbytes_per_sec": 0 00:23:38.127 }, 00:23:38.127 "claimed": true, 00:23:38.127 "claim_type": "exclusive_write", 00:23:38.127 "zoned": false, 00:23:38.127 "supported_io_types": { 00:23:38.127 "read": true, 00:23:38.127 "write": true, 00:23:38.127 "unmap": true, 00:23:38.127 "write_zeroes": true, 00:23:38.127 "flush": true, 00:23:38.127 "reset": true, 00:23:38.127 "compare": false, 00:23:38.127 "compare_and_write": false, 00:23:38.127 "abort": true, 00:23:38.127 "nvme_admin": false, 00:23:38.127 "nvme_io": false 00:23:38.127 }, 00:23:38.127 "memory_domains": [ 00:23:38.127 { 00:23:38.127 "dma_device_id": "system", 00:23:38.127 "dma_device_type": 1 00:23:38.127 }, 00:23:38.127 { 00:23:38.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.127 "dma_device_type": 2 00:23:38.127 } 00:23:38.127 ], 00:23:38.127 "driver_specific": {} 00:23:38.127 } 00:23:38.127 ] 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.127 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.128 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.384 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:23:38.384 "name": "Existed_Raid", 00:23:38.384 "uuid": "03524106-1357-11ef-8e8f-9dd684e56d79", 00:23:38.384 "strip_size_kb": 64, 00:23:38.384 "state": "online", 00:23:38.384 "raid_level": "raid0", 00:23:38.385 "superblock": false, 00:23:38.385 "num_base_bdevs": 4, 00:23:38.385 "num_base_bdevs_discovered": 4, 00:23:38.385 "num_base_bdevs_operational": 4, 00:23:38.385 "base_bdevs_list": [ 00:23:38.385 { 00:23:38.385 "name": "BaseBdev1", 00:23:38.385 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:38.385 "is_configured": true, 00:23:38.385 "data_offset": 0, 00:23:38.385 "data_size": 65536 00:23:38.385 }, 00:23:38.385 { 00:23:38.385 "name": "BaseBdev2", 00:23:38.385 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:38.385 "is_configured": true, 00:23:38.385 "data_offset": 0, 00:23:38.385 "data_size": 65536 00:23:38.385 }, 00:23:38.385 { 00:23:38.385 "name": "BaseBdev3", 00:23:38.385 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:38.385 "is_configured": true, 00:23:38.385 "data_offset": 0, 00:23:38.385 "data_size": 65536 00:23:38.385 }, 00:23:38.385 { 00:23:38.385 "name": "BaseBdev4", 00:23:38.385 "uuid": "03523c0f-1357-11ef-8e8f-9dd684e56d79", 00:23:38.385 "is_configured": true, 00:23:38.385 "data_offset": 0, 00:23:38.385 "data_size": 65536 00:23:38.385 } 00:23:38.385 ] 00:23:38.385 }' 00:23:38.385 07:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.385 07:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:38.950 [2024-05-16 07:36:32.441930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:38.950 "name": "Existed_Raid", 00:23:38.950 "aliases": [ 00:23:38.950 "03524106-1357-11ef-8e8f-9dd684e56d79" 00:23:38.950 ], 00:23:38.950 "product_name": "Raid Volume", 00:23:38.950 "block_size": 512, 00:23:38.950 "num_blocks": 262144, 00:23:38.950 "uuid": "03524106-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "assigned_rate_limits": { 00:23:38.950 "rw_ios_per_sec": 0, 00:23:38.950 "rw_mbytes_per_sec": 0, 00:23:38.950 "r_mbytes_per_sec": 0, 00:23:38.950 "w_mbytes_per_sec": 0 00:23:38.950 }, 00:23:38.950 "claimed": false, 00:23:38.950 "zoned": false, 00:23:38.950 "supported_io_types": { 00:23:38.950 "read": true, 00:23:38.950 "write": true, 00:23:38.950 "unmap": true, 00:23:38.950 "write_zeroes": true, 00:23:38.950 "flush": true, 00:23:38.950 "reset": true, 00:23:38.950 "compare": false, 00:23:38.950 "compare_and_write": 
false, 00:23:38.950 "abort": false, 00:23:38.950 "nvme_admin": false, 00:23:38.950 "nvme_io": false 00:23:38.950 }, 00:23:38.950 "memory_domains": [ 00:23:38.950 { 00:23:38.950 "dma_device_id": "system", 00:23:38.950 "dma_device_type": 1 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.950 "dma_device_type": 2 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "system", 00:23:38.950 "dma_device_type": 1 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.950 "dma_device_type": 2 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "system", 00:23:38.950 "dma_device_type": 1 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.950 "dma_device_type": 2 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "system", 00:23:38.950 "dma_device_type": 1 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.950 "dma_device_type": 2 00:23:38.950 } 00:23:38.950 ], 00:23:38.950 "driver_specific": { 00:23:38.950 "raid": { 00:23:38.950 "uuid": "03524106-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "strip_size_kb": 64, 00:23:38.950 "state": "online", 00:23:38.950 "raid_level": "raid0", 00:23:38.950 "superblock": false, 00:23:38.950 "num_base_bdevs": 4, 00:23:38.950 "num_base_bdevs_discovered": 4, 00:23:38.950 "num_base_bdevs_operational": 4, 00:23:38.950 "base_bdevs_list": [ 00:23:38.950 { 00:23:38.950 "name": "BaseBdev1", 00:23:38.950 "uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "is_configured": true, 00:23:38.950 "data_offset": 0, 00:23:38.950 "data_size": 65536 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "name": "BaseBdev2", 00:23:38.950 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "is_configured": true, 00:23:38.950 "data_offset": 0, 00:23:38.950 "data_size": 65536 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "name": "BaseBdev3", 00:23:38.950 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "is_configured": true, 00:23:38.950 "data_offset": 0, 00:23:38.950 "data_size": 65536 00:23:38.950 }, 00:23:38.950 { 00:23:38.950 "name": "BaseBdev4", 00:23:38.950 "uuid": "03523c0f-1357-11ef-8e8f-9dd684e56d79", 00:23:38.950 "is_configured": true, 00:23:38.950 "data_offset": 0, 00:23:38.950 "data_size": 65536 00:23:38.950 } 00:23:38.950 ] 00:23:38.950 } 00:23:38.950 } 00:23:38.950 }' 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:23:38.950 BaseBdev2 00:23:38.950 BaseBdev3 00:23:38.950 BaseBdev4' 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:38.950 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:39.207 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:39.207 "name": "BaseBdev1", 00:23:39.207 "aliases": [ 00:23:39.207 "003c6794-1357-11ef-8e8f-9dd684e56d79" 00:23:39.207 ], 00:23:39.207 "product_name": "Malloc disk", 00:23:39.207 "block_size": 512, 00:23:39.207 "num_blocks": 65536, 00:23:39.207 
"uuid": "003c6794-1357-11ef-8e8f-9dd684e56d79", 00:23:39.207 "assigned_rate_limits": { 00:23:39.207 "rw_ios_per_sec": 0, 00:23:39.207 "rw_mbytes_per_sec": 0, 00:23:39.207 "r_mbytes_per_sec": 0, 00:23:39.207 "w_mbytes_per_sec": 0 00:23:39.207 }, 00:23:39.207 "claimed": true, 00:23:39.207 "claim_type": "exclusive_write", 00:23:39.207 "zoned": false, 00:23:39.207 "supported_io_types": { 00:23:39.207 "read": true, 00:23:39.207 "write": true, 00:23:39.207 "unmap": true, 00:23:39.207 "write_zeroes": true, 00:23:39.207 "flush": true, 00:23:39.207 "reset": true, 00:23:39.207 "compare": false, 00:23:39.207 "compare_and_write": false, 00:23:39.207 "abort": true, 00:23:39.207 "nvme_admin": false, 00:23:39.207 "nvme_io": false 00:23:39.207 }, 00:23:39.207 "memory_domains": [ 00:23:39.207 { 00:23:39.207 "dma_device_id": "system", 00:23:39.207 "dma_device_type": 1 00:23:39.207 }, 00:23:39.207 { 00:23:39.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.207 "dma_device_type": 2 00:23:39.207 } 00:23:39.208 ], 00:23:39.208 "driver_specific": {} 00:23:39.208 }' 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.208 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:39.465 "name": "BaseBdev2", 00:23:39.465 "aliases": [ 00:23:39.465 "019b5fd5-1357-11ef-8e8f-9dd684e56d79" 00:23:39.465 ], 00:23:39.465 "product_name": "Malloc disk", 00:23:39.465 "block_size": 512, 00:23:39.465 "num_blocks": 65536, 00:23:39.465 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:39.465 "assigned_rate_limits": { 00:23:39.465 "rw_ios_per_sec": 0, 00:23:39.465 "rw_mbytes_per_sec": 0, 00:23:39.465 "r_mbytes_per_sec": 0, 00:23:39.465 "w_mbytes_per_sec": 0 00:23:39.465 }, 00:23:39.465 "claimed": true, 00:23:39.465 "claim_type": "exclusive_write", 00:23:39.465 "zoned": false, 00:23:39.465 "supported_io_types": { 00:23:39.465 "read": true, 00:23:39.465 "write": true, 00:23:39.465 "unmap": true, 00:23:39.465 
"write_zeroes": true, 00:23:39.465 "flush": true, 00:23:39.465 "reset": true, 00:23:39.465 "compare": false, 00:23:39.465 "compare_and_write": false, 00:23:39.465 "abort": true, 00:23:39.465 "nvme_admin": false, 00:23:39.465 "nvme_io": false 00:23:39.465 }, 00:23:39.465 "memory_domains": [ 00:23:39.465 { 00:23:39.465 "dma_device_id": "system", 00:23:39.465 "dma_device_type": 1 00:23:39.465 }, 00:23:39.465 { 00:23:39.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.465 "dma_device_type": 2 00:23:39.465 } 00:23:39.465 ], 00:23:39.465 "driver_specific": {} 00:23:39.465 }' 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.465 07:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.465 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:39.465 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.465 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.465 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.465 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:39.724 "name": "BaseBdev3", 00:23:39.724 "aliases": [ 00:23:39.724 "0269aee4-1357-11ef-8e8f-9dd684e56d79" 00:23:39.724 ], 00:23:39.724 "product_name": "Malloc disk", 00:23:39.724 "block_size": 512, 00:23:39.724 "num_blocks": 65536, 00:23:39.724 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:39.724 "assigned_rate_limits": { 00:23:39.724 "rw_ios_per_sec": 0, 00:23:39.724 "rw_mbytes_per_sec": 0, 00:23:39.724 "r_mbytes_per_sec": 0, 00:23:39.724 "w_mbytes_per_sec": 0 00:23:39.724 }, 00:23:39.724 "claimed": true, 00:23:39.724 "claim_type": "exclusive_write", 00:23:39.724 "zoned": false, 00:23:39.724 "supported_io_types": { 00:23:39.724 "read": true, 00:23:39.724 "write": true, 00:23:39.724 "unmap": true, 00:23:39.724 "write_zeroes": true, 00:23:39.724 "flush": true, 00:23:39.724 "reset": true, 00:23:39.724 "compare": false, 00:23:39.724 "compare_and_write": false, 00:23:39.724 "abort": true, 00:23:39.724 "nvme_admin": false, 00:23:39.724 "nvme_io": false 00:23:39.724 }, 00:23:39.724 "memory_domains": [ 00:23:39.724 { 00:23:39.724 "dma_device_id": "system", 00:23:39.724 "dma_device_type": 1 00:23:39.724 }, 00:23:39.724 { 00:23:39.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.724 "dma_device_type": 
2 00:23:39.724 } 00:23:39.724 ], 00:23:39.724 "driver_specific": {} 00:23:39.724 }' 00:23:39.724 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:39.981 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:39.982 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:39.982 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:40.240 "name": "BaseBdev4", 00:23:40.240 "aliases": [ 00:23:40.240 "03523c0f-1357-11ef-8e8f-9dd684e56d79" 00:23:40.240 ], 00:23:40.240 "product_name": "Malloc disk", 00:23:40.240 "block_size": 512, 00:23:40.240 "num_blocks": 65536, 00:23:40.240 "uuid": "03523c0f-1357-11ef-8e8f-9dd684e56d79", 00:23:40.240 "assigned_rate_limits": { 00:23:40.240 "rw_ios_per_sec": 0, 00:23:40.240 "rw_mbytes_per_sec": 0, 00:23:40.240 "r_mbytes_per_sec": 0, 00:23:40.240 "w_mbytes_per_sec": 0 00:23:40.240 }, 00:23:40.240 "claimed": true, 00:23:40.240 "claim_type": "exclusive_write", 00:23:40.240 "zoned": false, 00:23:40.240 "supported_io_types": { 00:23:40.240 "read": true, 00:23:40.240 "write": true, 00:23:40.240 "unmap": true, 00:23:40.240 "write_zeroes": true, 00:23:40.240 "flush": true, 00:23:40.240 "reset": true, 00:23:40.240 "compare": false, 00:23:40.240 "compare_and_write": false, 00:23:40.240 "abort": true, 00:23:40.240 "nvme_admin": false, 00:23:40.240 "nvme_io": false 00:23:40.240 }, 00:23:40.240 "memory_domains": [ 00:23:40.240 { 00:23:40.240 "dma_device_id": "system", 00:23:40.240 "dma_device_type": 1 00:23:40.240 }, 00:23:40.240 { 00:23:40.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.240 "dma_device_type": 2 00:23:40.240 } 00:23:40.240 ], 00:23:40.240 "driver_specific": {} 00:23:40.240 }' 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 
00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:40.240 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:40.496 [2024-05-16 07:36:33.945961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:40.496 [2024-05-16 07:36:33.945989] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.496 [2024-05-16 07:36:33.946002] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.496 07:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.753 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.753 "name": "Existed_Raid", 00:23:40.753 "uuid": 
"03524106-1357-11ef-8e8f-9dd684e56d79", 00:23:40.753 "strip_size_kb": 64, 00:23:40.753 "state": "offline", 00:23:40.753 "raid_level": "raid0", 00:23:40.753 "superblock": false, 00:23:40.753 "num_base_bdevs": 4, 00:23:40.753 "num_base_bdevs_discovered": 3, 00:23:40.753 "num_base_bdevs_operational": 3, 00:23:40.753 "base_bdevs_list": [ 00:23:40.753 { 00:23:40.753 "name": null, 00:23:40.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.753 "is_configured": false, 00:23:40.753 "data_offset": 0, 00:23:40.753 "data_size": 65536 00:23:40.753 }, 00:23:40.753 { 00:23:40.753 "name": "BaseBdev2", 00:23:40.753 "uuid": "019b5fd5-1357-11ef-8e8f-9dd684e56d79", 00:23:40.753 "is_configured": true, 00:23:40.753 "data_offset": 0, 00:23:40.753 "data_size": 65536 00:23:40.753 }, 00:23:40.753 { 00:23:40.753 "name": "BaseBdev3", 00:23:40.753 "uuid": "0269aee4-1357-11ef-8e8f-9dd684e56d79", 00:23:40.753 "is_configured": true, 00:23:40.753 "data_offset": 0, 00:23:40.753 "data_size": 65536 00:23:40.753 }, 00:23:40.753 { 00:23:40.753 "name": "BaseBdev4", 00:23:40.753 "uuid": "03523c0f-1357-11ef-8e8f-9dd684e56d79", 00:23:40.753 "is_configured": true, 00:23:40.753 "data_offset": 0, 00:23:40.753 "data_size": 65536 00:23:40.753 } 00:23:40.753 ] 00:23:40.753 }' 00:23:40.753 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.753 07:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.010 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:41.010 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:41.010 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.010 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:41.266 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:41.266 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:41.266 07:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:41.523 [2024-05-16 07:36:35.070849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:41.781 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:41.781 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:41.781 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.781 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:42.038 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:42.038 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:42.038 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:42.296 [2024-05-16 07:36:35.627590] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:42.296 
07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:42.296 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:42.296 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.296 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:23:42.553 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:23:42.553 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:42.553 07:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:42.811 [2024-05-16 07:36:36.208681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:42.811 [2024-05-16 07:36:36.208730] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5fca00 name Existed_Raid, state offline 00:23:42.811 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:42.811 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:42.811 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.811 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:23:43.068 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:23:43.069 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:23:43.069 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:23:43.069 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:23:43.069 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:43.069 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:43.326 BaseBdev2 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:43.327 07:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:43.585 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:43.843 [ 00:23:43.843 { 00:23:43.843 "name": 
"BaseBdev2", 00:23:43.843 "aliases": [ 00:23:43.843 "06b4e7a0-1357-11ef-8e8f-9dd684e56d79" 00:23:43.843 ], 00:23:43.844 "product_name": "Malloc disk", 00:23:43.844 "block_size": 512, 00:23:43.844 "num_blocks": 65536, 00:23:43.844 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:43.844 "assigned_rate_limits": { 00:23:43.844 "rw_ios_per_sec": 0, 00:23:43.844 "rw_mbytes_per_sec": 0, 00:23:43.844 "r_mbytes_per_sec": 0, 00:23:43.844 "w_mbytes_per_sec": 0 00:23:43.844 }, 00:23:43.844 "claimed": false, 00:23:43.844 "zoned": false, 00:23:43.844 "supported_io_types": { 00:23:43.844 "read": true, 00:23:43.844 "write": true, 00:23:43.844 "unmap": true, 00:23:43.844 "write_zeroes": true, 00:23:43.844 "flush": true, 00:23:43.844 "reset": true, 00:23:43.844 "compare": false, 00:23:43.844 "compare_and_write": false, 00:23:43.844 "abort": true, 00:23:43.844 "nvme_admin": false, 00:23:43.844 "nvme_io": false 00:23:43.844 }, 00:23:43.844 "memory_domains": [ 00:23:43.844 { 00:23:43.844 "dma_device_id": "system", 00:23:43.844 "dma_device_type": 1 00:23:43.844 }, 00:23:43.844 { 00:23:43.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.844 "dma_device_type": 2 00:23:43.844 } 00:23:43.844 ], 00:23:43.844 "driver_specific": {} 00:23:43.844 } 00:23:43.844 ] 00:23:43.844 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:43.844 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:23:43.844 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:43.844 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:44.103 BaseBdev3 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:44.103 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.361 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:44.620 [ 00:23:44.620 { 00:23:44.620 "name": "BaseBdev3", 00:23:44.620 "aliases": [ 00:23:44.620 "0725d2d3-1357-11ef-8e8f-9dd684e56d79" 00:23:44.620 ], 00:23:44.620 "product_name": "Malloc disk", 00:23:44.620 "block_size": 512, 00:23:44.620 "num_blocks": 65536, 00:23:44.620 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:44.620 "assigned_rate_limits": { 00:23:44.620 "rw_ios_per_sec": 0, 00:23:44.620 "rw_mbytes_per_sec": 0, 00:23:44.620 "r_mbytes_per_sec": 0, 00:23:44.620 "w_mbytes_per_sec": 0 00:23:44.620 }, 00:23:44.620 "claimed": false, 00:23:44.620 "zoned": false, 00:23:44.620 "supported_io_types": { 00:23:44.620 "read": true, 00:23:44.620 "write": true, 
00:23:44.620 "unmap": true, 00:23:44.620 "write_zeroes": true, 00:23:44.620 "flush": true, 00:23:44.620 "reset": true, 00:23:44.620 "compare": false, 00:23:44.620 "compare_and_write": false, 00:23:44.620 "abort": true, 00:23:44.620 "nvme_admin": false, 00:23:44.620 "nvme_io": false 00:23:44.620 }, 00:23:44.620 "memory_domains": [ 00:23:44.620 { 00:23:44.620 "dma_device_id": "system", 00:23:44.620 "dma_device_type": 1 00:23:44.620 }, 00:23:44.620 { 00:23:44.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.620 "dma_device_type": 2 00:23:44.620 } 00:23:44.620 ], 00:23:44.620 "driver_specific": {} 00:23:44.620 } 00:23:44.620 ] 00:23:44.620 07:36:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:44.620 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:23:44.620 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:44.620 07:36:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:44.879 BaseBdev4 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.879 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:45.137 [ 00:23:45.138 { 00:23:45.138 "name": "BaseBdev4", 00:23:45.138 "aliases": [ 00:23:45.138 "0796beef-1357-11ef-8e8f-9dd684e56d79" 00:23:45.138 ], 00:23:45.138 "product_name": "Malloc disk", 00:23:45.138 "block_size": 512, 00:23:45.138 "num_blocks": 65536, 00:23:45.138 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:45.138 "assigned_rate_limits": { 00:23:45.138 "rw_ios_per_sec": 0, 00:23:45.138 "rw_mbytes_per_sec": 0, 00:23:45.138 "r_mbytes_per_sec": 0, 00:23:45.138 "w_mbytes_per_sec": 0 00:23:45.138 }, 00:23:45.138 "claimed": false, 00:23:45.138 "zoned": false, 00:23:45.138 "supported_io_types": { 00:23:45.138 "read": true, 00:23:45.138 "write": true, 00:23:45.138 "unmap": true, 00:23:45.138 "write_zeroes": true, 00:23:45.138 "flush": true, 00:23:45.138 "reset": true, 00:23:45.138 "compare": false, 00:23:45.138 "compare_and_write": false, 00:23:45.138 "abort": true, 00:23:45.138 "nvme_admin": false, 00:23:45.138 "nvme_io": false 00:23:45.138 }, 00:23:45.138 "memory_domains": [ 00:23:45.138 { 00:23:45.138 "dma_device_id": "system", 00:23:45.138 "dma_device_type": 1 00:23:45.138 }, 00:23:45.138 { 00:23:45.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.138 "dma_device_type": 2 00:23:45.138 } 00:23:45.138 ], 00:23:45.138 "driver_specific": {} 00:23:45.138 } 00:23:45.138 ] 
00:23:45.138 07:36:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:45.138 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:23:45.138 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:23:45.138 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:45.394 [2024-05-16 07:36:38.893730] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:45.394 [2024-05-16 07:36:38.893781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:45.394 [2024-05-16 07:36:38.893789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.394 [2024-05-16 07:36:38.894200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.394 [2024-05-16 07:36:38.894215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:45.394 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:45.394 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:45.394 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:45.394 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.395 07:36:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.653 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.653 "name": "Existed_Raid", 00:23:45.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.653 "strip_size_kb": 64, 00:23:45.653 "state": "configuring", 00:23:45.653 "raid_level": "raid0", 00:23:45.653 "superblock": false, 00:23:45.653 "num_base_bdevs": 4, 00:23:45.653 "num_base_bdevs_discovered": 3, 00:23:45.653 "num_base_bdevs_operational": 4, 00:23:45.653 "base_bdevs_list": [ 00:23:45.653 { 00:23:45.653 "name": "BaseBdev1", 00:23:45.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.653 "is_configured": false, 00:23:45.653 "data_offset": 0, 00:23:45.653 "data_size": 0 00:23:45.653 }, 00:23:45.653 { 00:23:45.653 "name": "BaseBdev2", 00:23:45.653 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:45.653 "is_configured": true, 
00:23:45.653 "data_offset": 0, 00:23:45.653 "data_size": 65536 00:23:45.653 }, 00:23:45.653 { 00:23:45.653 "name": "BaseBdev3", 00:23:45.653 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:45.653 "is_configured": true, 00:23:45.653 "data_offset": 0, 00:23:45.653 "data_size": 65536 00:23:45.653 }, 00:23:45.653 { 00:23:45.653 "name": "BaseBdev4", 00:23:45.653 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:45.653 "is_configured": true, 00:23:45.653 "data_offset": 0, 00:23:45.653 "data_size": 65536 00:23:45.653 } 00:23:45.653 ] 00:23:45.653 }' 00:23:45.653 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.653 07:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.912 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:46.171 [2024-05-16 07:36:39.705769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.171 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.430 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:46.430 "name": "Existed_Raid", 00:23:46.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.430 "strip_size_kb": 64, 00:23:46.430 "state": "configuring", 00:23:46.430 "raid_level": "raid0", 00:23:46.430 "superblock": false, 00:23:46.430 "num_base_bdevs": 4, 00:23:46.430 "num_base_bdevs_discovered": 2, 00:23:46.430 "num_base_bdevs_operational": 4, 00:23:46.430 "base_bdevs_list": [ 00:23:46.430 { 00:23:46.430 "name": "BaseBdev1", 00:23:46.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.430 "is_configured": false, 00:23:46.430 "data_offset": 0, 00:23:46.430 "data_size": 0 00:23:46.430 }, 00:23:46.430 { 00:23:46.430 "name": null, 00:23:46.430 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:46.430 "is_configured": false, 00:23:46.430 "data_offset": 0, 00:23:46.430 "data_size": 65536 00:23:46.430 }, 00:23:46.430 { 00:23:46.430 "name": "BaseBdev3", 00:23:46.430 
"uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:46.430 "is_configured": true, 00:23:46.430 "data_offset": 0, 00:23:46.430 "data_size": 65536 00:23:46.430 }, 00:23:46.430 { 00:23:46.430 "name": "BaseBdev4", 00:23:46.430 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:46.430 "is_configured": true, 00:23:46.430 "data_offset": 0, 00:23:46.430 "data_size": 65536 00:23:46.430 } 00:23:46.430 ] 00:23:46.430 }' 00:23:46.430 07:36:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:46.430 07:36:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.698 07:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:46.698 07:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.956 07:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:23:46.956 07:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:47.215 [2024-05-16 07:36:40.749919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.215 BaseBdev1 00:23:47.215 07:36:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:23:47.215 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:47.473 07:36:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:47.732 [ 00:23:47.732 { 00:23:47.732 "name": "BaseBdev1", 00:23:47.732 "aliases": [ 00:23:47.732 "09191dc1-1357-11ef-8e8f-9dd684e56d79" 00:23:47.732 ], 00:23:47.732 "product_name": "Malloc disk", 00:23:47.732 "block_size": 512, 00:23:47.732 "num_blocks": 65536, 00:23:47.732 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:47.732 "assigned_rate_limits": { 00:23:47.732 "rw_ios_per_sec": 0, 00:23:47.732 "rw_mbytes_per_sec": 0, 00:23:47.732 "r_mbytes_per_sec": 0, 00:23:47.733 "w_mbytes_per_sec": 0 00:23:47.733 }, 00:23:47.733 "claimed": true, 00:23:47.733 "claim_type": "exclusive_write", 00:23:47.733 "zoned": false, 00:23:47.733 "supported_io_types": { 00:23:47.733 "read": true, 00:23:47.733 "write": true, 00:23:47.733 "unmap": true, 00:23:47.733 "write_zeroes": true, 00:23:47.733 "flush": true, 00:23:47.733 "reset": true, 00:23:47.733 "compare": false, 00:23:47.733 "compare_and_write": false, 00:23:47.733 "abort": true, 00:23:47.733 "nvme_admin": false, 00:23:47.733 "nvme_io": false 00:23:47.733 }, 00:23:47.733 "memory_domains": [ 00:23:47.733 { 00:23:47.733 
"dma_device_id": "system", 00:23:47.733 "dma_device_type": 1 00:23:47.733 }, 00:23:47.733 { 00:23:47.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.733 "dma_device_type": 2 00:23:47.733 } 00:23:47.733 ], 00:23:47.733 "driver_specific": {} 00:23:47.733 } 00:23:47.733 ] 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.733 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.992 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.992 "name": "Existed_Raid", 00:23:47.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.992 "strip_size_kb": 64, 00:23:47.992 "state": "configuring", 00:23:47.992 "raid_level": "raid0", 00:23:47.992 "superblock": false, 00:23:47.992 "num_base_bdevs": 4, 00:23:47.992 "num_base_bdevs_discovered": 3, 00:23:47.992 "num_base_bdevs_operational": 4, 00:23:47.992 "base_bdevs_list": [ 00:23:47.992 { 00:23:47.992 "name": "BaseBdev1", 00:23:47.992 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:47.992 "is_configured": true, 00:23:47.992 "data_offset": 0, 00:23:47.992 "data_size": 65536 00:23:47.992 }, 00:23:47.992 { 00:23:47.992 "name": null, 00:23:47.992 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:47.992 "is_configured": false, 00:23:47.992 "data_offset": 0, 00:23:47.992 "data_size": 65536 00:23:47.992 }, 00:23:47.992 { 00:23:47.992 "name": "BaseBdev3", 00:23:47.992 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:47.992 "is_configured": true, 00:23:47.992 "data_offset": 0, 00:23:47.992 "data_size": 65536 00:23:47.992 }, 00:23:47.992 { 00:23:47.992 "name": "BaseBdev4", 00:23:47.992 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:47.992 "is_configured": true, 00:23:47.992 "data_offset": 0, 00:23:47.992 "data_size": 65536 00:23:47.992 } 00:23:47.992 ] 00:23:47.992 }' 00:23:47.992 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.992 07:36:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.562 07:36:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.562 07:36:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:48.821 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:48.821 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:48.821 [2024-05-16 07:36:42.365849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.081 "name": "Existed_Raid", 00:23:49.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.081 "strip_size_kb": 64, 00:23:49.081 "state": "configuring", 00:23:49.081 "raid_level": "raid0", 00:23:49.081 "superblock": false, 00:23:49.081 "num_base_bdevs": 4, 00:23:49.081 "num_base_bdevs_discovered": 2, 00:23:49.081 "num_base_bdevs_operational": 4, 00:23:49.081 "base_bdevs_list": [ 00:23:49.081 { 00:23:49.081 "name": "BaseBdev1", 00:23:49.081 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:49.081 "is_configured": true, 00:23:49.081 "data_offset": 0, 00:23:49.081 "data_size": 65536 00:23:49.081 }, 00:23:49.081 { 00:23:49.081 "name": null, 00:23:49.081 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:49.081 "is_configured": false, 00:23:49.081 "data_offset": 0, 00:23:49.081 "data_size": 65536 00:23:49.081 }, 00:23:49.081 { 00:23:49.081 "name": null, 00:23:49.081 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:49.081 "is_configured": false, 00:23:49.081 "data_offset": 0, 00:23:49.081 "data_size": 65536 00:23:49.081 }, 00:23:49.081 { 00:23:49.081 "name": "BaseBdev4", 00:23:49.081 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:49.081 "is_configured": true, 00:23:49.081 "data_offset": 0, 00:23:49.081 "data_size": 65536 00:23:49.081 } 00:23:49.081 ] 
00:23:49.081 }' 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.081 07:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.357 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:49.357 07:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.615 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:23:49.615 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:50.183 [2024-05-16 07:36:43.433880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.183 "name": "Existed_Raid", 00:23:50.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.183 "strip_size_kb": 64, 00:23:50.183 "state": "configuring", 00:23:50.183 "raid_level": "raid0", 00:23:50.183 "superblock": false, 00:23:50.183 "num_base_bdevs": 4, 00:23:50.183 "num_base_bdevs_discovered": 3, 00:23:50.183 "num_base_bdevs_operational": 4, 00:23:50.183 "base_bdevs_list": [ 00:23:50.183 { 00:23:50.183 "name": "BaseBdev1", 00:23:50.183 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:50.183 "is_configured": true, 00:23:50.183 "data_offset": 0, 00:23:50.183 "data_size": 65536 00:23:50.183 }, 00:23:50.183 { 00:23:50.183 "name": null, 00:23:50.183 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:50.183 "is_configured": false, 00:23:50.183 "data_offset": 0, 00:23:50.183 "data_size": 65536 00:23:50.183 }, 00:23:50.183 { 00:23:50.183 "name": "BaseBdev3", 00:23:50.183 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:50.183 "is_configured": true, 
00:23:50.183 "data_offset": 0, 00:23:50.183 "data_size": 65536 00:23:50.183 }, 00:23:50.183 { 00:23:50.183 "name": "BaseBdev4", 00:23:50.183 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:50.183 "is_configured": true, 00:23:50.183 "data_offset": 0, 00:23:50.183 "data_size": 65536 00:23:50.183 } 00:23:50.183 ] 00:23:50.183 }' 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.183 07:36:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.748 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.748 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:50.748 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:23:50.748 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:51.006 [2024-05-16 07:36:44.449904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.006 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.265 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:51.265 "name": "Existed_Raid", 00:23:51.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.265 "strip_size_kb": 64, 00:23:51.265 "state": "configuring", 00:23:51.265 "raid_level": "raid0", 00:23:51.265 "superblock": false, 00:23:51.265 "num_base_bdevs": 4, 00:23:51.265 "num_base_bdevs_discovered": 2, 00:23:51.265 "num_base_bdevs_operational": 4, 00:23:51.265 "base_bdevs_list": [ 00:23:51.265 { 00:23:51.265 "name": null, 00:23:51.265 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:51.265 "is_configured": false, 00:23:51.265 "data_offset": 0, 00:23:51.265 "data_size": 65536 00:23:51.265 }, 00:23:51.265 { 00:23:51.265 "name": null, 00:23:51.265 "uuid": 
"06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:51.265 "is_configured": false, 00:23:51.265 "data_offset": 0, 00:23:51.265 "data_size": 65536 00:23:51.265 }, 00:23:51.265 { 00:23:51.265 "name": "BaseBdev3", 00:23:51.265 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:51.265 "is_configured": true, 00:23:51.265 "data_offset": 0, 00:23:51.265 "data_size": 65536 00:23:51.265 }, 00:23:51.265 { 00:23:51.265 "name": "BaseBdev4", 00:23:51.265 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:51.265 "is_configured": true, 00:23:51.265 "data_offset": 0, 00:23:51.265 "data_size": 65536 00:23:51.265 } 00:23:51.265 ] 00:23:51.265 }' 00:23:51.265 07:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:51.265 07:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.524 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.524 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:51.782 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:23:51.782 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:52.105 [2024-05-16 07:36:45.438666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.105 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.364 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.364 "name": "Existed_Raid", 00:23:52.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.364 "strip_size_kb": 64, 00:23:52.364 "state": "configuring", 00:23:52.364 "raid_level": "raid0", 00:23:52.364 "superblock": false, 00:23:52.364 "num_base_bdevs": 4, 00:23:52.364 "num_base_bdevs_discovered": 3, 00:23:52.364 "num_base_bdevs_operational": 4, 
00:23:52.364 "base_bdevs_list": [ 00:23:52.364 { 00:23:52.364 "name": null, 00:23:52.364 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:52.364 "is_configured": false, 00:23:52.364 "data_offset": 0, 00:23:52.364 "data_size": 65536 00:23:52.364 }, 00:23:52.364 { 00:23:52.364 "name": "BaseBdev2", 00:23:52.364 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:52.364 "is_configured": true, 00:23:52.364 "data_offset": 0, 00:23:52.364 "data_size": 65536 00:23:52.364 }, 00:23:52.364 { 00:23:52.364 "name": "BaseBdev3", 00:23:52.364 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:52.364 "is_configured": true, 00:23:52.364 "data_offset": 0, 00:23:52.364 "data_size": 65536 00:23:52.364 }, 00:23:52.364 { 00:23:52.364 "name": "BaseBdev4", 00:23:52.364 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:52.364 "is_configured": true, 00:23:52.364 "data_offset": 0, 00:23:52.364 "data_size": 65536 00:23:52.364 } 00:23:52.364 ] 00:23:52.364 }' 00:23:52.364 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.364 07:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.364 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.364 07:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:52.623 07:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:23:52.623 07:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.623 07:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:52.881 07:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 09191dc1-1357-11ef-8e8f-9dd684e56d79 00:23:53.140 [2024-05-16 07:36:46.526776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:53.140 [2024-05-16 07:36:46.526798] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5fcf00 00:23:53.140 [2024-05-16 07:36:46.526802] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:53.140 [2024-05-16 07:36:46.526820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b65fe20 00:23:53.140 [2024-05-16 07:36:46.526872] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5fcf00 00:23:53.140 [2024-05-16 07:36:46.526875] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5fcf00 00:23:53.140 [2024-05-16 07:36:46.526899] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.140 NewBaseBdev 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:53.140 07:36:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:53.140 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:53.399 07:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:53.657 [ 00:23:53.657 { 00:23:53.657 "name": "NewBaseBdev", 00:23:53.657 "aliases": [ 00:23:53.657 "09191dc1-1357-11ef-8e8f-9dd684e56d79" 00:23:53.657 ], 00:23:53.657 "product_name": "Malloc disk", 00:23:53.657 "block_size": 512, 00:23:53.657 "num_blocks": 65536, 00:23:53.657 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:53.657 "assigned_rate_limits": { 00:23:53.657 "rw_ios_per_sec": 0, 00:23:53.657 "rw_mbytes_per_sec": 0, 00:23:53.657 "r_mbytes_per_sec": 0, 00:23:53.657 "w_mbytes_per_sec": 0 00:23:53.657 }, 00:23:53.657 "claimed": true, 00:23:53.657 "claim_type": "exclusive_write", 00:23:53.657 "zoned": false, 00:23:53.657 "supported_io_types": { 00:23:53.657 "read": true, 00:23:53.657 "write": true, 00:23:53.657 "unmap": true, 00:23:53.657 "write_zeroes": true, 00:23:53.657 "flush": true, 00:23:53.657 "reset": true, 00:23:53.657 "compare": false, 00:23:53.657 "compare_and_write": false, 00:23:53.657 "abort": true, 00:23:53.657 "nvme_admin": false, 00:23:53.657 "nvme_io": false 00:23:53.657 }, 00:23:53.657 "memory_domains": [ 00:23:53.657 { 00:23:53.657 "dma_device_id": "system", 00:23:53.657 "dma_device_type": 1 00:23:53.657 }, 00:23:53.657 { 00:23:53.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.657 "dma_device_type": 2 00:23:53.657 } 00:23:53.657 ], 00:23:53.657 "driver_specific": {} 00:23:53.657 } 00:23:53.657 ] 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.657 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.916 07:36:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:53.916 "name": "Existed_Raid", 00:23:53.916 "uuid": "0c8a9d67-1357-11ef-8e8f-9dd684e56d79", 00:23:53.916 "strip_size_kb": 64, 00:23:53.916 "state": "online", 00:23:53.916 "raid_level": "raid0", 00:23:53.916 "superblock": false, 00:23:53.916 "num_base_bdevs": 4, 00:23:53.916 "num_base_bdevs_discovered": 4, 00:23:53.916 "num_base_bdevs_operational": 4, 00:23:53.916 "base_bdevs_list": [ 00:23:53.916 { 00:23:53.916 "name": "NewBaseBdev", 00:23:53.916 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:53.916 "is_configured": true, 00:23:53.916 "data_offset": 0, 00:23:53.916 "data_size": 65536 00:23:53.916 }, 00:23:53.916 { 00:23:53.916 "name": "BaseBdev2", 00:23:53.916 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:53.916 "is_configured": true, 00:23:53.916 "data_offset": 0, 00:23:53.916 "data_size": 65536 00:23:53.916 }, 00:23:53.916 { 00:23:53.916 "name": "BaseBdev3", 00:23:53.916 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:53.916 "is_configured": true, 00:23:53.916 "data_offset": 0, 00:23:53.916 "data_size": 65536 00:23:53.916 }, 00:23:53.916 { 00:23:53.916 "name": "BaseBdev4", 00:23:53.916 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:53.916 "is_configured": true, 00:23:53.916 "data_offset": 0, 00:23:53.916 "data_size": 65536 00:23:53.916 } 00:23:53.916 ] 00:23:53.916 }' 00:23:53.916 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:53.916 07:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:23:54.175 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:54.434 [2024-05-16 07:36:47.878741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.434 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:23:54.434 "name": "Existed_Raid", 00:23:54.434 "aliases": [ 00:23:54.434 "0c8a9d67-1357-11ef-8e8f-9dd684e56d79" 00:23:54.434 ], 00:23:54.434 "product_name": "Raid Volume", 00:23:54.434 "block_size": 512, 00:23:54.434 "num_blocks": 262144, 00:23:54.434 "uuid": "0c8a9d67-1357-11ef-8e8f-9dd684e56d79", 00:23:54.434 "assigned_rate_limits": { 00:23:54.434 "rw_ios_per_sec": 0, 00:23:54.434 "rw_mbytes_per_sec": 0, 00:23:54.434 "r_mbytes_per_sec": 0, 00:23:54.434 "w_mbytes_per_sec": 0 00:23:54.435 }, 00:23:54.435 "claimed": false, 00:23:54.435 "zoned": false, 00:23:54.435 "supported_io_types": { 00:23:54.435 "read": true, 00:23:54.435 "write": true, 00:23:54.435 "unmap": true, 00:23:54.435 "write_zeroes": true, 00:23:54.435 "flush": true, 00:23:54.435 
"reset": true, 00:23:54.435 "compare": false, 00:23:54.435 "compare_and_write": false, 00:23:54.435 "abort": false, 00:23:54.435 "nvme_admin": false, 00:23:54.435 "nvme_io": false 00:23:54.435 }, 00:23:54.435 "memory_domains": [ 00:23:54.435 { 00:23:54.435 "dma_device_id": "system", 00:23:54.435 "dma_device_type": 1 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.435 "dma_device_type": 2 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "system", 00:23:54.435 "dma_device_type": 1 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.435 "dma_device_type": 2 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "system", 00:23:54.435 "dma_device_type": 1 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.435 "dma_device_type": 2 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "system", 00:23:54.435 "dma_device_type": 1 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.435 "dma_device_type": 2 00:23:54.435 } 00:23:54.435 ], 00:23:54.435 "driver_specific": { 00:23:54.435 "raid": { 00:23:54.435 "uuid": "0c8a9d67-1357-11ef-8e8f-9dd684e56d79", 00:23:54.435 "strip_size_kb": 64, 00:23:54.435 "state": "online", 00:23:54.435 "raid_level": "raid0", 00:23:54.435 "superblock": false, 00:23:54.435 "num_base_bdevs": 4, 00:23:54.435 "num_base_bdevs_discovered": 4, 00:23:54.435 "num_base_bdevs_operational": 4, 00:23:54.435 "base_bdevs_list": [ 00:23:54.435 { 00:23:54.435 "name": "NewBaseBdev", 00:23:54.435 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:54.435 "is_configured": true, 00:23:54.435 "data_offset": 0, 00:23:54.435 "data_size": 65536 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "name": "BaseBdev2", 00:23:54.435 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:54.435 "is_configured": true, 00:23:54.435 "data_offset": 0, 00:23:54.435 "data_size": 65536 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "name": "BaseBdev3", 00:23:54.435 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:54.435 "is_configured": true, 00:23:54.435 "data_offset": 0, 00:23:54.435 "data_size": 65536 00:23:54.435 }, 00:23:54.435 { 00:23:54.435 "name": "BaseBdev4", 00:23:54.435 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:54.435 "is_configured": true, 00:23:54.435 "data_offset": 0, 00:23:54.435 "data_size": 65536 00:23:54.435 } 00:23:54.435 ] 00:23:54.435 } 00:23:54.435 } 00:23:54.435 }' 00:23:54.435 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:54.435 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:23:54.435 BaseBdev2 00:23:54.435 BaseBdev3 00:23:54.435 BaseBdev4' 00:23:54.435 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:54.435 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:54.435 07:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:54.694 "name": "NewBaseBdev", 00:23:54.694 "aliases": [ 00:23:54.694 "09191dc1-1357-11ef-8e8f-9dd684e56d79" 00:23:54.694 ], 00:23:54.694 "product_name": "Malloc 
disk", 00:23:54.694 "block_size": 512, 00:23:54.694 "num_blocks": 65536, 00:23:54.694 "uuid": "09191dc1-1357-11ef-8e8f-9dd684e56d79", 00:23:54.694 "assigned_rate_limits": { 00:23:54.694 "rw_ios_per_sec": 0, 00:23:54.694 "rw_mbytes_per_sec": 0, 00:23:54.694 "r_mbytes_per_sec": 0, 00:23:54.694 "w_mbytes_per_sec": 0 00:23:54.694 }, 00:23:54.694 "claimed": true, 00:23:54.694 "claim_type": "exclusive_write", 00:23:54.694 "zoned": false, 00:23:54.694 "supported_io_types": { 00:23:54.694 "read": true, 00:23:54.694 "write": true, 00:23:54.694 "unmap": true, 00:23:54.694 "write_zeroes": true, 00:23:54.694 "flush": true, 00:23:54.694 "reset": true, 00:23:54.694 "compare": false, 00:23:54.694 "compare_and_write": false, 00:23:54.694 "abort": true, 00:23:54.694 "nvme_admin": false, 00:23:54.694 "nvme_io": false 00:23:54.694 }, 00:23:54.694 "memory_domains": [ 00:23:54.694 { 00:23:54.694 "dma_device_id": "system", 00:23:54.694 "dma_device_type": 1 00:23:54.694 }, 00:23:54.694 { 00:23:54.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.694 "dma_device_type": 2 00:23:54.694 } 00:23:54.694 ], 00:23:54.694 "driver_specific": {} 00:23:54.694 }' 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:54.694 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:54.953 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:55.212 "name": "BaseBdev2", 00:23:55.212 "aliases": [ 00:23:55.212 "06b4e7a0-1357-11ef-8e8f-9dd684e56d79" 00:23:55.212 ], 00:23:55.212 "product_name": "Malloc disk", 00:23:55.212 "block_size": 512, 00:23:55.212 "num_blocks": 65536, 00:23:55.212 "uuid": "06b4e7a0-1357-11ef-8e8f-9dd684e56d79", 00:23:55.212 "assigned_rate_limits": { 00:23:55.212 "rw_ios_per_sec": 0, 00:23:55.212 "rw_mbytes_per_sec": 0, 00:23:55.212 "r_mbytes_per_sec": 0, 00:23:55.212 "w_mbytes_per_sec": 0 00:23:55.212 }, 00:23:55.212 "claimed": true, 00:23:55.212 "claim_type": "exclusive_write", 00:23:55.212 "zoned": false, 00:23:55.212 "supported_io_types": { 00:23:55.212 "read": 
true, 00:23:55.212 "write": true, 00:23:55.212 "unmap": true, 00:23:55.212 "write_zeroes": true, 00:23:55.212 "flush": true, 00:23:55.212 "reset": true, 00:23:55.212 "compare": false, 00:23:55.212 "compare_and_write": false, 00:23:55.212 "abort": true, 00:23:55.212 "nvme_admin": false, 00:23:55.212 "nvme_io": false 00:23:55.212 }, 00:23:55.212 "memory_domains": [ 00:23:55.212 { 00:23:55.212 "dma_device_id": "system", 00:23:55.212 "dma_device_type": 1 00:23:55.212 }, 00:23:55.212 { 00:23:55.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.212 "dma_device_type": 2 00:23:55.212 } 00:23:55.212 ], 00:23:55.212 "driver_specific": {} 00:23:55.212 }' 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:55.212 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:55.471 "name": "BaseBdev3", 00:23:55.471 "aliases": [ 00:23:55.471 "0725d2d3-1357-11ef-8e8f-9dd684e56d79" 00:23:55.471 ], 00:23:55.471 "product_name": "Malloc disk", 00:23:55.471 "block_size": 512, 00:23:55.471 "num_blocks": 65536, 00:23:55.471 "uuid": "0725d2d3-1357-11ef-8e8f-9dd684e56d79", 00:23:55.471 "assigned_rate_limits": { 00:23:55.471 "rw_ios_per_sec": 0, 00:23:55.471 "rw_mbytes_per_sec": 0, 00:23:55.471 "r_mbytes_per_sec": 0, 00:23:55.471 "w_mbytes_per_sec": 0 00:23:55.471 }, 00:23:55.471 "claimed": true, 00:23:55.471 "claim_type": "exclusive_write", 00:23:55.471 "zoned": false, 00:23:55.471 "supported_io_types": { 00:23:55.471 "read": true, 00:23:55.471 "write": true, 00:23:55.471 "unmap": true, 00:23:55.471 "write_zeroes": true, 00:23:55.471 "flush": true, 00:23:55.471 "reset": true, 00:23:55.471 "compare": false, 00:23:55.471 "compare_and_write": false, 00:23:55.471 "abort": true, 00:23:55.471 "nvme_admin": false, 00:23:55.471 "nvme_io": false 00:23:55.471 }, 00:23:55.471 "memory_domains": [ 00:23:55.471 { 00:23:55.471 "dma_device_id": "system", 00:23:55.471 "dma_device_type": 1 00:23:55.471 }, 00:23:55.471 { 
00:23:55.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.471 "dma_device_type": 2 00:23:55.471 } 00:23:55.471 ], 00:23:55.471 "driver_specific": {} 00:23:55.471 }' 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:23:55.471 07:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:23:55.730 "name": "BaseBdev4", 00:23:55.730 "aliases": [ 00:23:55.730 "0796beef-1357-11ef-8e8f-9dd684e56d79" 00:23:55.730 ], 00:23:55.730 "product_name": "Malloc disk", 00:23:55.730 "block_size": 512, 00:23:55.730 "num_blocks": 65536, 00:23:55.730 "uuid": "0796beef-1357-11ef-8e8f-9dd684e56d79", 00:23:55.730 "assigned_rate_limits": { 00:23:55.730 "rw_ios_per_sec": 0, 00:23:55.730 "rw_mbytes_per_sec": 0, 00:23:55.730 "r_mbytes_per_sec": 0, 00:23:55.730 "w_mbytes_per_sec": 0 00:23:55.730 }, 00:23:55.730 "claimed": true, 00:23:55.730 "claim_type": "exclusive_write", 00:23:55.730 "zoned": false, 00:23:55.730 "supported_io_types": { 00:23:55.730 "read": true, 00:23:55.730 "write": true, 00:23:55.730 "unmap": true, 00:23:55.730 "write_zeroes": true, 00:23:55.730 "flush": true, 00:23:55.730 "reset": true, 00:23:55.730 "compare": false, 00:23:55.730 "compare_and_write": false, 00:23:55.730 "abort": true, 00:23:55.730 "nvme_admin": false, 00:23:55.730 "nvme_io": false 00:23:55.730 }, 00:23:55.730 "memory_domains": [ 00:23:55.730 { 00:23:55.730 "dma_device_id": "system", 00:23:55.730 "dma_device_type": 1 00:23:55.730 }, 00:23:55.730 { 00:23:55.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.730 "dma_device_type": 2 00:23:55.730 } 00:23:55.730 ], 00:23:55.730 "driver_specific": {} 00:23:55.730 }' 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:23:55.730 
07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.730 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:23:55.731 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:55.731 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.731 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:23:55.731 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:23:55.731 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:55.989 [2024-05-16 07:36:49.434731] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.989 [2024-05-16 07:36:49.434751] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.989 [2024-05-16 07:36:49.434764] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.990 [2024-05-16 07:36:49.434775] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.990 [2024-05-16 07:36:49.434779] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5fcf00 name Existed_Raid, state offline 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 57642 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 57642 ']' 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 57642 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 57642 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:23:55.990 killing process with pid 57642 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 57642' 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 57642 00:23:55.990 [2024-05-16 07:36:49.466035] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.990 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 57642 00:23:55.990 [2024-05-16 07:36:49.485218] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:23:56.248 00:23:56.248 real 0m26.451s 00:23:56.248 user 0m48.316s 
00:23:56.248 sys 0m3.810s 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:56.248 ************************************ 00:23:56.248 END TEST raid_state_function_test 00:23:56.248 ************************************ 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.248 07:36:49 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:23:56.248 07:36:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:56.248 07:36:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:56.248 07:36:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.248 ************************************ 00:23:56.248 START TEST raid_state_function_test_sb 00:23:56.248 ************************************ 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid0 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:23:56.248 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid0 '!=' raid1 ']' 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=58457 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 58457' 00:23:56.249 Process raid pid: 58457 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 58457 /var/tmp/spdk-raid.sock 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 58457 ']' 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.249 07:36:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.249 [2024-05-16 07:36:49.713678] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:23:56.249 [2024-05-16 07:36:49.713957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:23:56.815 EAL: TSC is not safe to use in SMP mode 00:23:56.815 EAL: TSC is not invariant 00:23:56.815 [2024-05-16 07:36:50.156790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.815 [2024-05-16 07:36:50.254793] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
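The EAL and app notices around this point in the trace come from the freshly started RPC target for the superblock variant of the test. The bring-up being performed amounts to the sequence below (binary path, socket, and flags copied from the trace; the polling loop is a simplified stand-in for the waitforlisten helper):

/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# poll the UNIX-domain socket until the target answers RPC calls
until /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done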
00:23:56.815 [2024-05-16 07:36:50.257385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.815 [2024-05-16 07:36:50.258272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.815 [2024-05-16 07:36:50.258287] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.381 07:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.381 07:36:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:23:57.381 07:36:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:57.641 [2024-05-16 07:36:51.038389] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:57.641 [2024-05-16 07:36:51.038445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:57.641 [2024-05-16 07:36:51.038450] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:57.641 [2024-05-16 07:36:51.038459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:57.641 [2024-05-16 07:36:51.038462] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:57.641 [2024-05-16 07:36:51.038470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:57.641 [2024-05-16 07:36:51.038473] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:57.641 [2024-05-16 07:36:51.038480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.641 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.900 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.900 "name": "Existed_Raid", 00:23:57.900 "uuid": 
"0f3b06cd-1357-11ef-8e8f-9dd684e56d79", 00:23:57.900 "strip_size_kb": 64, 00:23:57.900 "state": "configuring", 00:23:57.900 "raid_level": "raid0", 00:23:57.900 "superblock": true, 00:23:57.900 "num_base_bdevs": 4, 00:23:57.900 "num_base_bdevs_discovered": 0, 00:23:57.900 "num_base_bdevs_operational": 4, 00:23:57.900 "base_bdevs_list": [ 00:23:57.900 { 00:23:57.900 "name": "BaseBdev1", 00:23:57.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.900 "is_configured": false, 00:23:57.900 "data_offset": 0, 00:23:57.900 "data_size": 0 00:23:57.900 }, 00:23:57.900 { 00:23:57.900 "name": "BaseBdev2", 00:23:57.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.900 "is_configured": false, 00:23:57.900 "data_offset": 0, 00:23:57.900 "data_size": 0 00:23:57.900 }, 00:23:57.900 { 00:23:57.900 "name": "BaseBdev3", 00:23:57.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.900 "is_configured": false, 00:23:57.900 "data_offset": 0, 00:23:57.900 "data_size": 0 00:23:57.900 }, 00:23:57.900 { 00:23:57.900 "name": "BaseBdev4", 00:23:57.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.900 "is_configured": false, 00:23:57.900 "data_offset": 0, 00:23:57.900 "data_size": 0 00:23:57.900 } 00:23:57.900 ] 00:23:57.900 }' 00:23:57.900 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.900 07:36:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:58.159 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:58.419 [2024-05-16 07:36:51.974375] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:58.419 [2024-05-16 07:36:51.974402] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e9a3500 name Existed_Raid, state configuring 00:23:58.677 07:36:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:58.936 [2024-05-16 07:36:52.238413] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:58.936 [2024-05-16 07:36:52.238469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:58.936 [2024-05-16 07:36:52.238474] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:58.936 [2024-05-16 07:36:52.238483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:58.936 [2024-05-16 07:36:52.238486] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:58.936 [2024-05-16 07:36:52.238494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:58.936 [2024-05-16 07:36:52.238497] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:58.936 [2024-05-16 07:36:52.238515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:58.936 07:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:59.194 [2024-05-16 07:36:52.507262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:23:59.194 BaseBdev1 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:59.194 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:59.453 07:36:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:59.453 [ 00:23:59.453 { 00:23:59.453 "name": "BaseBdev1", 00:23:59.453 "aliases": [ 00:23:59.453 "101b07eb-1357-11ef-8e8f-9dd684e56d79" 00:23:59.453 ], 00:23:59.453 "product_name": "Malloc disk", 00:23:59.453 "block_size": 512, 00:23:59.453 "num_blocks": 65536, 00:23:59.453 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:23:59.453 "assigned_rate_limits": { 00:23:59.453 "rw_ios_per_sec": 0, 00:23:59.453 "rw_mbytes_per_sec": 0, 00:23:59.453 "r_mbytes_per_sec": 0, 00:23:59.453 "w_mbytes_per_sec": 0 00:23:59.453 }, 00:23:59.453 "claimed": true, 00:23:59.453 "claim_type": "exclusive_write", 00:23:59.453 "zoned": false, 00:23:59.453 "supported_io_types": { 00:23:59.453 "read": true, 00:23:59.453 "write": true, 00:23:59.453 "unmap": true, 00:23:59.453 "write_zeroes": true, 00:23:59.453 "flush": true, 00:23:59.453 "reset": true, 00:23:59.453 "compare": false, 00:23:59.453 "compare_and_write": false, 00:23:59.453 "abort": true, 00:23:59.453 "nvme_admin": false, 00:23:59.453 "nvme_io": false 00:23:59.453 }, 00:23:59.453 "memory_domains": [ 00:23:59.453 { 00:23:59.453 "dma_device_id": "system", 00:23:59.453 "dma_device_type": 1 00:23:59.453 }, 00:23:59.453 { 00:23:59.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.453 "dma_device_type": 2 00:23:59.453 } 00:23:59.453 ], 00:23:59.453 "driver_specific": {} 00:23:59.453 } 00:23:59.453 ] 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.453 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.711 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:59.711 "name": "Existed_Raid", 00:23:59.711 "uuid": "0ff222ca-1357-11ef-8e8f-9dd684e56d79", 00:23:59.711 "strip_size_kb": 64, 00:23:59.711 "state": "configuring", 00:23:59.711 "raid_level": "raid0", 00:23:59.711 "superblock": true, 00:23:59.711 "num_base_bdevs": 4, 00:23:59.711 "num_base_bdevs_discovered": 1, 00:23:59.711 "num_base_bdevs_operational": 4, 00:23:59.711 "base_bdevs_list": [ 00:23:59.711 { 00:23:59.711 "name": "BaseBdev1", 00:23:59.711 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:23:59.711 "is_configured": true, 00:23:59.711 "data_offset": 2048, 00:23:59.711 "data_size": 63488 00:23:59.711 }, 00:23:59.711 { 00:23:59.711 "name": "BaseBdev2", 00:23:59.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.711 "is_configured": false, 00:23:59.711 "data_offset": 0, 00:23:59.711 "data_size": 0 00:23:59.711 }, 00:23:59.711 { 00:23:59.711 "name": "BaseBdev3", 00:23:59.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.711 "is_configured": false, 00:23:59.711 "data_offset": 0, 00:23:59.711 "data_size": 0 00:23:59.711 }, 00:23:59.711 { 00:23:59.711 "name": "BaseBdev4", 00:23:59.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.711 "is_configured": false, 00:23:59.711 "data_offset": 0, 00:23:59.711 "data_size": 0 00:23:59.711 } 00:23:59.711 ] 00:23:59.711 }' 00:23:59.711 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:59.711 07:36:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.972 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:00.237 [2024-05-16 07:36:53.722420] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:00.237 [2024-05-16 07:36:53.722449] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e9a3500 name Existed_Raid, state configuring 00:24:00.237 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:00.494 [2024-05-16 07:36:53.930433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:00.494 [2024-05-16 07:36:53.931104] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:00.494 [2024-05-16 07:36:53.931146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:00.494 [2024-05-16 07:36:53.931150] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:00.494 [2024-05-16 07:36:53.931158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:24:00.494 [2024-05-16 07:36:53.931161] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:00.494 [2024-05-16 07:36:53.931168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.494 07:36:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.752 07:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.752 "name": "Existed_Raid", 00:24:00.752 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:00.752 "strip_size_kb": 64, 00:24:00.752 "state": "configuring", 00:24:00.752 "raid_level": "raid0", 00:24:00.752 "superblock": true, 00:24:00.752 "num_base_bdevs": 4, 00:24:00.752 "num_base_bdevs_discovered": 1, 00:24:00.752 "num_base_bdevs_operational": 4, 00:24:00.752 "base_bdevs_list": [ 00:24:00.752 { 00:24:00.752 "name": "BaseBdev1", 00:24:00.752 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:00.752 "is_configured": true, 00:24:00.752 "data_offset": 2048, 00:24:00.752 "data_size": 63488 00:24:00.752 }, 00:24:00.752 { 00:24:00.752 "name": "BaseBdev2", 00:24:00.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.752 "is_configured": false, 00:24:00.752 "data_offset": 0, 00:24:00.752 "data_size": 0 00:24:00.752 }, 00:24:00.752 { 00:24:00.752 "name": "BaseBdev3", 00:24:00.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.752 "is_configured": false, 00:24:00.752 "data_offset": 0, 00:24:00.752 "data_size": 0 00:24:00.752 }, 00:24:00.752 { 00:24:00.752 "name": "BaseBdev4", 00:24:00.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.752 "is_configured": false, 00:24:00.752 "data_offset": 0, 00:24:00.752 "data_size": 0 00:24:00.752 } 00:24:00.752 ] 00:24:00.752 }' 00:24:00.753 07:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:24:00.753 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:01.011 07:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:01.268 [2024-05-16 07:36:54.806549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:01.268 BaseBdev2 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:01.268 07:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:01.524 07:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:01.780 [ 00:24:01.780 { 00:24:01.780 "name": "BaseBdev2", 00:24:01.780 "aliases": [ 00:24:01.780 "1179fcaa-1357-11ef-8e8f-9dd684e56d79" 00:24:01.780 ], 00:24:01.780 "product_name": "Malloc disk", 00:24:01.780 "block_size": 512, 00:24:01.780 "num_blocks": 65536, 00:24:01.780 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:01.780 "assigned_rate_limits": { 00:24:01.780 "rw_ios_per_sec": 0, 00:24:01.780 "rw_mbytes_per_sec": 0, 00:24:01.780 "r_mbytes_per_sec": 0, 00:24:01.780 "w_mbytes_per_sec": 0 00:24:01.780 }, 00:24:01.780 "claimed": true, 00:24:01.780 "claim_type": "exclusive_write", 00:24:01.780 "zoned": false, 00:24:01.780 "supported_io_types": { 00:24:01.780 "read": true, 00:24:01.780 "write": true, 00:24:01.780 "unmap": true, 00:24:01.780 "write_zeroes": true, 00:24:01.780 "flush": true, 00:24:01.780 "reset": true, 00:24:01.780 "compare": false, 00:24:01.780 "compare_and_write": false, 00:24:01.780 "abort": true, 00:24:01.780 "nvme_admin": false, 00:24:01.780 "nvme_io": false 00:24:01.780 }, 00:24:01.780 "memory_domains": [ 00:24:01.780 { 00:24:01.780 "dma_device_id": "system", 00:24:01.780 "dma_device_type": 1 00:24:01.780 }, 00:24:01.780 { 00:24:01.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.780 "dma_device_type": 2 00:24:01.780 } 00:24:01.780 ], 00:24:01.780 "driver_specific": {} 00:24:01.780 } 00:24:01.780 ] 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:01.780 07:36:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.780 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.037 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:02.037 "name": "Existed_Raid", 00:24:02.037 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:02.037 "strip_size_kb": 64, 00:24:02.037 "state": "configuring", 00:24:02.037 "raid_level": "raid0", 00:24:02.037 "superblock": true, 00:24:02.037 "num_base_bdevs": 4, 00:24:02.037 "num_base_bdevs_discovered": 2, 00:24:02.037 "num_base_bdevs_operational": 4, 00:24:02.037 "base_bdevs_list": [ 00:24:02.037 { 00:24:02.037 "name": "BaseBdev1", 00:24:02.037 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:02.037 "is_configured": true, 00:24:02.037 "data_offset": 2048, 00:24:02.037 "data_size": 63488 00:24:02.037 }, 00:24:02.037 { 00:24:02.037 "name": "BaseBdev2", 00:24:02.037 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:02.037 "is_configured": true, 00:24:02.037 "data_offset": 2048, 00:24:02.037 "data_size": 63488 00:24:02.037 }, 00:24:02.037 { 00:24:02.037 "name": "BaseBdev3", 00:24:02.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.037 "is_configured": false, 00:24:02.037 "data_offset": 0, 00:24:02.037 "data_size": 0 00:24:02.037 }, 00:24:02.037 { 00:24:02.037 "name": "BaseBdev4", 00:24:02.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.037 "is_configured": false, 00:24:02.037 "data_offset": 0, 00:24:02.037 "data_size": 0 00:24:02.037 } 00:24:02.037 ] 00:24:02.037 }' 00:24:02.037 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:02.037 07:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.347 07:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:02.619 [2024-05-16 07:36:56.074547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:02.619 BaseBdev3 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local 
bdev_timeout= 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:02.619 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.877 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:03.135 [ 00:24:03.135 { 00:24:03.135 "name": "BaseBdev3", 00:24:03.135 "aliases": [ 00:24:03.135 "123b78da-1357-11ef-8e8f-9dd684e56d79" 00:24:03.135 ], 00:24:03.135 "product_name": "Malloc disk", 00:24:03.135 "block_size": 512, 00:24:03.135 "num_blocks": 65536, 00:24:03.135 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:03.135 "assigned_rate_limits": { 00:24:03.135 "rw_ios_per_sec": 0, 00:24:03.135 "rw_mbytes_per_sec": 0, 00:24:03.135 "r_mbytes_per_sec": 0, 00:24:03.135 "w_mbytes_per_sec": 0 00:24:03.135 }, 00:24:03.135 "claimed": true, 00:24:03.135 "claim_type": "exclusive_write", 00:24:03.135 "zoned": false, 00:24:03.135 "supported_io_types": { 00:24:03.135 "read": true, 00:24:03.135 "write": true, 00:24:03.135 "unmap": true, 00:24:03.135 "write_zeroes": true, 00:24:03.135 "flush": true, 00:24:03.135 "reset": true, 00:24:03.135 "compare": false, 00:24:03.135 "compare_and_write": false, 00:24:03.135 "abort": true, 00:24:03.135 "nvme_admin": false, 00:24:03.135 "nvme_io": false 00:24:03.135 }, 00:24:03.135 "memory_domains": [ 00:24:03.135 { 00:24:03.135 "dma_device_id": "system", 00:24:03.135 "dma_device_type": 1 00:24:03.135 }, 00:24:03.135 { 00:24:03.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.135 "dma_device_type": 2 00:24:03.135 } 00:24:03.135 ], 00:24:03.135 "driver_specific": {} 00:24:03.135 } 00:24:03.135 ] 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.135 07:36:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.135 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.392 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.392 "name": "Existed_Raid", 00:24:03.392 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:03.392 "strip_size_kb": 64, 00:24:03.392 "state": "configuring", 00:24:03.392 "raid_level": "raid0", 00:24:03.392 "superblock": true, 00:24:03.392 "num_base_bdevs": 4, 00:24:03.392 "num_base_bdevs_discovered": 3, 00:24:03.392 "num_base_bdevs_operational": 4, 00:24:03.392 "base_bdevs_list": [ 00:24:03.392 { 00:24:03.392 "name": "BaseBdev1", 00:24:03.392 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:03.392 "is_configured": true, 00:24:03.392 "data_offset": 2048, 00:24:03.392 "data_size": 63488 00:24:03.392 }, 00:24:03.392 { 00:24:03.392 "name": "BaseBdev2", 00:24:03.392 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:03.392 "is_configured": true, 00:24:03.392 "data_offset": 2048, 00:24:03.392 "data_size": 63488 00:24:03.392 }, 00:24:03.392 { 00:24:03.392 "name": "BaseBdev3", 00:24:03.392 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:03.392 "is_configured": true, 00:24:03.392 "data_offset": 2048, 00:24:03.392 "data_size": 63488 00:24:03.392 }, 00:24:03.392 { 00:24:03.392 "name": "BaseBdev4", 00:24:03.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.392 "is_configured": false, 00:24:03.392 "data_offset": 0, 00:24:03.392 "data_size": 0 00:24:03.392 } 00:24:03.392 ] 00:24:03.392 }' 00:24:03.392 07:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.392 07:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:03.650 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:03.908 [2024-05-16 07:36:57.242618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:03.908 [2024-05-16 07:36:57.242691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e9a3a00 00:24:03.908 [2024-05-16 07:36:57.242695] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:03.908 [2024-05-16 07:36:57.242712] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ea06ec0 00:24:03.908 [2024-05-16 07:36:57.242750] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e9a3a00 00:24:03.908 [2024-05-16 07:36:57.242753] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e9a3a00 00:24:03.908 [2024-05-16 07:36:57.242768] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.908 BaseBdev4 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:03.908 07:36:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:03.908 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:04.168 [ 00:24:04.168 { 00:24:04.168 "name": "BaseBdev4", 00:24:04.168 "aliases": [ 00:24:04.168 "12edb415-1357-11ef-8e8f-9dd684e56d79" 00:24:04.168 ], 00:24:04.168 "product_name": "Malloc disk", 00:24:04.168 "block_size": 512, 00:24:04.168 "num_blocks": 65536, 00:24:04.168 "uuid": "12edb415-1357-11ef-8e8f-9dd684e56d79", 00:24:04.168 "assigned_rate_limits": { 00:24:04.168 "rw_ios_per_sec": 0, 00:24:04.168 "rw_mbytes_per_sec": 0, 00:24:04.168 "r_mbytes_per_sec": 0, 00:24:04.168 "w_mbytes_per_sec": 0 00:24:04.168 }, 00:24:04.168 "claimed": true, 00:24:04.168 "claim_type": "exclusive_write", 00:24:04.168 "zoned": false, 00:24:04.168 "supported_io_types": { 00:24:04.168 "read": true, 00:24:04.168 "write": true, 00:24:04.168 "unmap": true, 00:24:04.168 "write_zeroes": true, 00:24:04.168 "flush": true, 00:24:04.168 "reset": true, 00:24:04.168 "compare": false, 00:24:04.168 "compare_and_write": false, 00:24:04.168 "abort": true, 00:24:04.168 "nvme_admin": false, 00:24:04.168 "nvme_io": false 00:24:04.168 }, 00:24:04.168 "memory_domains": [ 00:24:04.168 { 00:24:04.168 "dma_device_id": "system", 00:24:04.168 "dma_device_type": 1 00:24:04.168 }, 00:24:04.168 { 00:24:04.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.168 "dma_device_type": 2 00:24:04.168 } 00:24:04.168 ], 00:24:04.168 "driver_specific": {} 00:24:04.168 } 00:24:04.168 ] 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:04.168 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.427 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.685 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.685 "name": "Existed_Raid", 00:24:04.685 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:04.685 "strip_size_kb": 64, 00:24:04.685 "state": "online", 00:24:04.685 "raid_level": "raid0", 00:24:04.685 "superblock": true, 00:24:04.686 "num_base_bdevs": 4, 00:24:04.686 "num_base_bdevs_discovered": 4, 00:24:04.686 "num_base_bdevs_operational": 4, 00:24:04.686 "base_bdevs_list": [ 00:24:04.686 { 00:24:04.686 "name": "BaseBdev1", 00:24:04.686 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:04.686 "is_configured": true, 00:24:04.686 "data_offset": 2048, 00:24:04.686 "data_size": 63488 00:24:04.686 }, 00:24:04.686 { 00:24:04.686 "name": "BaseBdev2", 00:24:04.686 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:04.686 "is_configured": true, 00:24:04.686 "data_offset": 2048, 00:24:04.686 "data_size": 63488 00:24:04.686 }, 00:24:04.686 { 00:24:04.686 "name": "BaseBdev3", 00:24:04.686 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:04.686 "is_configured": true, 00:24:04.686 "data_offset": 2048, 00:24:04.686 "data_size": 63488 00:24:04.686 }, 00:24:04.686 { 00:24:04.686 "name": "BaseBdev4", 00:24:04.686 "uuid": "12edb415-1357-11ef-8e8f-9dd684e56d79", 00:24:04.686 "is_configured": true, 00:24:04.686 "data_offset": 2048, 00:24:04.686 "data_size": 63488 00:24:04.686 } 00:24:04.686 ] 00:24:04.686 }' 00:24:04.686 07:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.686 07:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:04.944 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:05.203 [2024-05-16 07:36:58.690581] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:05.203 "name": "Existed_Raid", 00:24:05.203 "aliases": [ 00:24:05.203 "10f45132-1357-11ef-8e8f-9dd684e56d79" 00:24:05.203 ], 00:24:05.203 "product_name": "Raid Volume", 00:24:05.203 "block_size": 512, 00:24:05.203 "num_blocks": 253952, 00:24:05.203 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 
"assigned_rate_limits": { 00:24:05.203 "rw_ios_per_sec": 0, 00:24:05.203 "rw_mbytes_per_sec": 0, 00:24:05.203 "r_mbytes_per_sec": 0, 00:24:05.203 "w_mbytes_per_sec": 0 00:24:05.203 }, 00:24:05.203 "claimed": false, 00:24:05.203 "zoned": false, 00:24:05.203 "supported_io_types": { 00:24:05.203 "read": true, 00:24:05.203 "write": true, 00:24:05.203 "unmap": true, 00:24:05.203 "write_zeroes": true, 00:24:05.203 "flush": true, 00:24:05.203 "reset": true, 00:24:05.203 "compare": false, 00:24:05.203 "compare_and_write": false, 00:24:05.203 "abort": false, 00:24:05.203 "nvme_admin": false, 00:24:05.203 "nvme_io": false 00:24:05.203 }, 00:24:05.203 "memory_domains": [ 00:24:05.203 { 00:24:05.203 "dma_device_id": "system", 00:24:05.203 "dma_device_type": 1 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.203 "dma_device_type": 2 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "system", 00:24:05.203 "dma_device_type": 1 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.203 "dma_device_type": 2 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "system", 00:24:05.203 "dma_device_type": 1 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.203 "dma_device_type": 2 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "system", 00:24:05.203 "dma_device_type": 1 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.203 "dma_device_type": 2 00:24:05.203 } 00:24:05.203 ], 00:24:05.203 "driver_specific": { 00:24:05.203 "raid": { 00:24:05.203 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 "strip_size_kb": 64, 00:24:05.203 "state": "online", 00:24:05.203 "raid_level": "raid0", 00:24:05.203 "superblock": true, 00:24:05.203 "num_base_bdevs": 4, 00:24:05.203 "num_base_bdevs_discovered": 4, 00:24:05.203 "num_base_bdevs_operational": 4, 00:24:05.203 "base_bdevs_list": [ 00:24:05.203 { 00:24:05.203 "name": "BaseBdev1", 00:24:05.203 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 "is_configured": true, 00:24:05.203 "data_offset": 2048, 00:24:05.203 "data_size": 63488 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "name": "BaseBdev2", 00:24:05.203 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 "is_configured": true, 00:24:05.203 "data_offset": 2048, 00:24:05.203 "data_size": 63488 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "name": "BaseBdev3", 00:24:05.203 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 "is_configured": true, 00:24:05.203 "data_offset": 2048, 00:24:05.203 "data_size": 63488 00:24:05.203 }, 00:24:05.203 { 00:24:05.203 "name": "BaseBdev4", 00:24:05.203 "uuid": "12edb415-1357-11ef-8e8f-9dd684e56d79", 00:24:05.203 "is_configured": true, 00:24:05.203 "data_offset": 2048, 00:24:05.203 "data_size": 63488 00:24:05.203 } 00:24:05.203 ] 00:24:05.203 } 00:24:05.203 } 00:24:05.203 }' 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:05.203 BaseBdev2 00:24:05.203 BaseBdev3 00:24:05.203 BaseBdev4' 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:05.203 07:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:05.771 "name": "BaseBdev1", 00:24:05.771 "aliases": [ 00:24:05.771 "101b07eb-1357-11ef-8e8f-9dd684e56d79" 00:24:05.771 ], 00:24:05.771 "product_name": "Malloc disk", 00:24:05.771 "block_size": 512, 00:24:05.771 "num_blocks": 65536, 00:24:05.771 "uuid": "101b07eb-1357-11ef-8e8f-9dd684e56d79", 00:24:05.771 "assigned_rate_limits": { 00:24:05.771 "rw_ios_per_sec": 0, 00:24:05.771 "rw_mbytes_per_sec": 0, 00:24:05.771 "r_mbytes_per_sec": 0, 00:24:05.771 "w_mbytes_per_sec": 0 00:24:05.771 }, 00:24:05.771 "claimed": true, 00:24:05.771 "claim_type": "exclusive_write", 00:24:05.771 "zoned": false, 00:24:05.771 "supported_io_types": { 00:24:05.771 "read": true, 00:24:05.771 "write": true, 00:24:05.771 "unmap": true, 00:24:05.771 "write_zeroes": true, 00:24:05.771 "flush": true, 00:24:05.771 "reset": true, 00:24:05.771 "compare": false, 00:24:05.771 "compare_and_write": false, 00:24:05.771 "abort": true, 00:24:05.771 "nvme_admin": false, 00:24:05.771 "nvme_io": false 00:24:05.771 }, 00:24:05.771 "memory_domains": [ 00:24:05.771 { 00:24:05.771 "dma_device_id": "system", 00:24:05.771 "dma_device_type": 1 00:24:05.771 }, 00:24:05.771 { 00:24:05.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.771 "dma_device_type": 2 00:24:05.771 } 00:24:05.771 ], 00:24:05.771 "driver_specific": {} 00:24:05.771 }' 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:05.771 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:06.030 "name": "BaseBdev2", 00:24:06.030 "aliases": [ 00:24:06.030 "1179fcaa-1357-11ef-8e8f-9dd684e56d79" 00:24:06.030 ], 
00:24:06.030 "product_name": "Malloc disk", 00:24:06.030 "block_size": 512, 00:24:06.030 "num_blocks": 65536, 00:24:06.030 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:06.030 "assigned_rate_limits": { 00:24:06.030 "rw_ios_per_sec": 0, 00:24:06.030 "rw_mbytes_per_sec": 0, 00:24:06.030 "r_mbytes_per_sec": 0, 00:24:06.030 "w_mbytes_per_sec": 0 00:24:06.030 }, 00:24:06.030 "claimed": true, 00:24:06.030 "claim_type": "exclusive_write", 00:24:06.030 "zoned": false, 00:24:06.030 "supported_io_types": { 00:24:06.030 "read": true, 00:24:06.030 "write": true, 00:24:06.030 "unmap": true, 00:24:06.030 "write_zeroes": true, 00:24:06.030 "flush": true, 00:24:06.030 "reset": true, 00:24:06.030 "compare": false, 00:24:06.030 "compare_and_write": false, 00:24:06.030 "abort": true, 00:24:06.030 "nvme_admin": false, 00:24:06.030 "nvme_io": false 00:24:06.030 }, 00:24:06.030 "memory_domains": [ 00:24:06.030 { 00:24:06.030 "dma_device_id": "system", 00:24:06.030 "dma_device_type": 1 00:24:06.030 }, 00:24:06.030 { 00:24:06.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.030 "dma_device_type": 2 00:24:06.030 } 00:24:06.030 ], 00:24:06.030 "driver_specific": {} 00:24:06.030 }' 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:06.030 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:06.289 "name": "BaseBdev3", 00:24:06.289 "aliases": [ 00:24:06.289 "123b78da-1357-11ef-8e8f-9dd684e56d79" 00:24:06.289 ], 00:24:06.289 "product_name": "Malloc disk", 00:24:06.289 "block_size": 512, 00:24:06.289 "num_blocks": 65536, 00:24:06.289 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:06.289 "assigned_rate_limits": { 00:24:06.289 "rw_ios_per_sec": 0, 00:24:06.289 "rw_mbytes_per_sec": 0, 00:24:06.289 "r_mbytes_per_sec": 0, 00:24:06.289 "w_mbytes_per_sec": 0 00:24:06.289 }, 00:24:06.289 "claimed": true, 00:24:06.289 "claim_type": "exclusive_write", 
00:24:06.289 "zoned": false, 00:24:06.289 "supported_io_types": { 00:24:06.289 "read": true, 00:24:06.289 "write": true, 00:24:06.289 "unmap": true, 00:24:06.289 "write_zeroes": true, 00:24:06.289 "flush": true, 00:24:06.289 "reset": true, 00:24:06.289 "compare": false, 00:24:06.289 "compare_and_write": false, 00:24:06.289 "abort": true, 00:24:06.289 "nvme_admin": false, 00:24:06.289 "nvme_io": false 00:24:06.289 }, 00:24:06.289 "memory_domains": [ 00:24:06.289 { 00:24:06.289 "dma_device_id": "system", 00:24:06.289 "dma_device_type": 1 00:24:06.289 }, 00:24:06.289 { 00:24:06.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.289 "dma_device_type": 2 00:24:06.289 } 00:24:06.289 ], 00:24:06.289 "driver_specific": {} 00:24:06.289 }' 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:06.289 07:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:06.547 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:06.547 "name": "BaseBdev4", 00:24:06.547 "aliases": [ 00:24:06.547 "12edb415-1357-11ef-8e8f-9dd684e56d79" 00:24:06.547 ], 00:24:06.547 "product_name": "Malloc disk", 00:24:06.547 "block_size": 512, 00:24:06.547 "num_blocks": 65536, 00:24:06.547 "uuid": "12edb415-1357-11ef-8e8f-9dd684e56d79", 00:24:06.547 "assigned_rate_limits": { 00:24:06.548 "rw_ios_per_sec": 0, 00:24:06.548 "rw_mbytes_per_sec": 0, 00:24:06.548 "r_mbytes_per_sec": 0, 00:24:06.548 "w_mbytes_per_sec": 0 00:24:06.548 }, 00:24:06.548 "claimed": true, 00:24:06.548 "claim_type": "exclusive_write", 00:24:06.548 "zoned": false, 00:24:06.548 "supported_io_types": { 00:24:06.548 "read": true, 00:24:06.548 "write": true, 00:24:06.548 "unmap": true, 00:24:06.548 "write_zeroes": true, 00:24:06.548 "flush": true, 00:24:06.548 "reset": true, 00:24:06.548 "compare": false, 00:24:06.548 "compare_and_write": false, 00:24:06.548 "abort": true, 00:24:06.548 "nvme_admin": false, 00:24:06.548 "nvme_io": false 00:24:06.548 }, 00:24:06.548 
"memory_domains": [ 00:24:06.548 { 00:24:06.548 "dma_device_id": "system", 00:24:06.548 "dma_device_type": 1 00:24:06.548 }, 00:24:06.548 { 00:24:06.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.548 "dma_device_type": 2 00:24:06.548 } 00:24:06.548 ], 00:24:06.548 "driver_specific": {} 00:24:06.548 }' 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.548 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.806 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:06.806 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:06.806 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:07.065 [2024-05-16 07:37:00.386582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.065 [2024-05-16 07:37:00.386608] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:07.065 [2024-05-16 07:37:00.386622] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid0 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.065 07:37:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.065 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.324 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.324 "name": "Existed_Raid", 00:24:07.324 "uuid": "10f45132-1357-11ef-8e8f-9dd684e56d79", 00:24:07.324 "strip_size_kb": 64, 00:24:07.324 "state": "offline", 00:24:07.324 "raid_level": "raid0", 00:24:07.324 "superblock": true, 00:24:07.324 "num_base_bdevs": 4, 00:24:07.324 "num_base_bdevs_discovered": 3, 00:24:07.324 "num_base_bdevs_operational": 3, 00:24:07.324 "base_bdevs_list": [ 00:24:07.324 { 00:24:07.324 "name": null, 00:24:07.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.324 "is_configured": false, 00:24:07.324 "data_offset": 2048, 00:24:07.324 "data_size": 63488 00:24:07.324 }, 00:24:07.324 { 00:24:07.324 "name": "BaseBdev2", 00:24:07.324 "uuid": "1179fcaa-1357-11ef-8e8f-9dd684e56d79", 00:24:07.324 "is_configured": true, 00:24:07.324 "data_offset": 2048, 00:24:07.324 "data_size": 63488 00:24:07.324 }, 00:24:07.324 { 00:24:07.324 "name": "BaseBdev3", 00:24:07.324 "uuid": "123b78da-1357-11ef-8e8f-9dd684e56d79", 00:24:07.324 "is_configured": true, 00:24:07.324 "data_offset": 2048, 00:24:07.324 "data_size": 63488 00:24:07.324 }, 00:24:07.324 { 00:24:07.324 "name": "BaseBdev4", 00:24:07.324 "uuid": "12edb415-1357-11ef-8e8f-9dd684e56d79", 00:24:07.324 "is_configured": true, 00:24:07.324 "data_offset": 2048, 00:24:07.324 "data_size": 63488 00:24:07.324 } 00:24:07.324 ] 00:24:07.324 }' 00:24:07.324 07:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.324 07:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.583 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:07.583 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:07.583 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:07.583 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.150 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:08.150 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:08.150 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:08.150 [2024-05-16 07:37:01.687490] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:08.410 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:08.410 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs 
)) 00:24:08.410 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.410 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:08.669 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:08.669 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:08.669 07:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:08.928 [2024-05-16 07:37:02.260414] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:08.928 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:08.928 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:08.928 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.928 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:09.186 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:09.186 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:09.186 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:09.444 [2024-05-16 07:37:02.777559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:09.444 [2024-05-16 07:37:02.777600] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e9a3a00 name Existed_Raid, state offline 00:24:09.444 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:09.444 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:09.444 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.444 07:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:09.703 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:09.961 BaseBdev2 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local 
bdev_name=BaseBdev2 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:09.961 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:10.272 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:10.531 [ 00:24:10.531 { 00:24:10.531 "name": "BaseBdev2", 00:24:10.531 "aliases": [ 00:24:10.531 "168cd56f-1357-11ef-8e8f-9dd684e56d79" 00:24:10.531 ], 00:24:10.531 "product_name": "Malloc disk", 00:24:10.531 "block_size": 512, 00:24:10.531 "num_blocks": 65536, 00:24:10.531 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:10.531 "assigned_rate_limits": { 00:24:10.531 "rw_ios_per_sec": 0, 00:24:10.531 "rw_mbytes_per_sec": 0, 00:24:10.531 "r_mbytes_per_sec": 0, 00:24:10.531 "w_mbytes_per_sec": 0 00:24:10.531 }, 00:24:10.531 "claimed": false, 00:24:10.531 "zoned": false, 00:24:10.531 "supported_io_types": { 00:24:10.531 "read": true, 00:24:10.531 "write": true, 00:24:10.531 "unmap": true, 00:24:10.531 "write_zeroes": true, 00:24:10.531 "flush": true, 00:24:10.531 "reset": true, 00:24:10.531 "compare": false, 00:24:10.531 "compare_and_write": false, 00:24:10.531 "abort": true, 00:24:10.531 "nvme_admin": false, 00:24:10.531 "nvme_io": false 00:24:10.531 }, 00:24:10.531 "memory_domains": [ 00:24:10.531 { 00:24:10.531 "dma_device_id": "system", 00:24:10.531 "dma_device_type": 1 00:24:10.531 }, 00:24:10.531 { 00:24:10.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.531 "dma_device_type": 2 00:24:10.531 } 00:24:10.531 ], 00:24:10.531 "driver_specific": {} 00:24:10.531 } 00:24:10.531 ] 00:24:10.531 07:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:10.531 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:10.531 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:10.531 07:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:10.531 BaseBdev3 00:24:10.789 07:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:10.790 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:11.048 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:11.307 [ 00:24:11.307 { 00:24:11.307 "name": "BaseBdev3", 00:24:11.307 "aliases": [ 00:24:11.307 "170168f1-1357-11ef-8e8f-9dd684e56d79" 00:24:11.307 ], 00:24:11.307 "product_name": "Malloc disk", 00:24:11.307 "block_size": 512, 00:24:11.307 "num_blocks": 65536, 00:24:11.307 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:11.307 "assigned_rate_limits": { 00:24:11.307 "rw_ios_per_sec": 0, 00:24:11.307 "rw_mbytes_per_sec": 0, 00:24:11.307 "r_mbytes_per_sec": 0, 00:24:11.307 "w_mbytes_per_sec": 0 00:24:11.307 }, 00:24:11.307 "claimed": false, 00:24:11.307 "zoned": false, 00:24:11.307 "supported_io_types": { 00:24:11.307 "read": true, 00:24:11.307 "write": true, 00:24:11.307 "unmap": true, 00:24:11.307 "write_zeroes": true, 00:24:11.307 "flush": true, 00:24:11.307 "reset": true, 00:24:11.307 "compare": false, 00:24:11.307 "compare_and_write": false, 00:24:11.307 "abort": true, 00:24:11.307 "nvme_admin": false, 00:24:11.307 "nvme_io": false 00:24:11.307 }, 00:24:11.307 "memory_domains": [ 00:24:11.307 { 00:24:11.307 "dma_device_id": "system", 00:24:11.307 "dma_device_type": 1 00:24:11.307 }, 00:24:11.307 { 00:24:11.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.307 "dma_device_type": 2 00:24:11.307 } 00:24:11.307 ], 00:24:11.307 "driver_specific": {} 00:24:11.307 } 00:24:11.307 ] 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:11.307 BaseBdev4 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:11.307 07:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:11.567 07:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:11.825 [ 00:24:11.825 { 00:24:11.825 "name": "BaseBdev4", 00:24:11.825 "aliases": [ 00:24:11.825 "1772efc8-1357-11ef-8e8f-9dd684e56d79" 00:24:11.825 ], 00:24:11.825 "product_name": "Malloc disk", 00:24:11.825 "block_size": 512, 00:24:11.825 "num_blocks": 65536, 00:24:11.825 "uuid": 
"1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:11.825 "assigned_rate_limits": { 00:24:11.825 "rw_ios_per_sec": 0, 00:24:11.825 "rw_mbytes_per_sec": 0, 00:24:11.825 "r_mbytes_per_sec": 0, 00:24:11.825 "w_mbytes_per_sec": 0 00:24:11.825 }, 00:24:11.825 "claimed": false, 00:24:11.825 "zoned": false, 00:24:11.825 "supported_io_types": { 00:24:11.825 "read": true, 00:24:11.825 "write": true, 00:24:11.825 "unmap": true, 00:24:11.825 "write_zeroes": true, 00:24:11.825 "flush": true, 00:24:11.825 "reset": true, 00:24:11.825 "compare": false, 00:24:11.825 "compare_and_write": false, 00:24:11.825 "abort": true, 00:24:11.825 "nvme_admin": false, 00:24:11.825 "nvme_io": false 00:24:11.825 }, 00:24:11.825 "memory_domains": [ 00:24:11.825 { 00:24:11.825 "dma_device_id": "system", 00:24:11.825 "dma_device_type": 1 00:24:11.825 }, 00:24:11.825 { 00:24:11.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.825 "dma_device_type": 2 00:24:11.825 } 00:24:11.825 ], 00:24:11.825 "driver_specific": {} 00:24:11.825 } 00:24:11.825 ] 00:24:11.825 07:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:11.825 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:11.825 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:11.825 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:12.084 [2024-05-16 07:37:05.502572] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:12.084 [2024-05-16 07:37:05.502624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:12.084 [2024-05-16 07:37:05.502633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.084 [2024-05-16 07:37:05.503069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:12.084 [2024-05-16 07:37:05.503079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:12.084 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.343 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.343 "name": "Existed_Raid", 00:24:12.343 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:12.343 "strip_size_kb": 64, 00:24:12.343 "state": "configuring", 00:24:12.343 "raid_level": "raid0", 00:24:12.343 "superblock": true, 00:24:12.343 "num_base_bdevs": 4, 00:24:12.343 "num_base_bdevs_discovered": 3, 00:24:12.343 "num_base_bdevs_operational": 4, 00:24:12.343 "base_bdevs_list": [ 00:24:12.343 { 00:24:12.343 "name": "BaseBdev1", 00:24:12.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.343 "is_configured": false, 00:24:12.343 "data_offset": 0, 00:24:12.343 "data_size": 0 00:24:12.343 }, 00:24:12.343 { 00:24:12.343 "name": "BaseBdev2", 00:24:12.343 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:12.343 "is_configured": true, 00:24:12.343 "data_offset": 2048, 00:24:12.343 "data_size": 63488 00:24:12.343 }, 00:24:12.343 { 00:24:12.343 "name": "BaseBdev3", 00:24:12.343 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:12.343 "is_configured": true, 00:24:12.343 "data_offset": 2048, 00:24:12.343 "data_size": 63488 00:24:12.343 }, 00:24:12.343 { 00:24:12.343 "name": "BaseBdev4", 00:24:12.343 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:12.343 "is_configured": true, 00:24:12.343 "data_offset": 2048, 00:24:12.343 "data_size": 63488 00:24:12.343 } 00:24:12.343 ] 00:24:12.343 }' 00:24:12.343 07:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.343 07:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.601 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:13.170 [2024-05-16 07:37:06.422624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.170 
07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.170 "name": "Existed_Raid", 00:24:13.170 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:13.170 "strip_size_kb": 64, 00:24:13.170 "state": "configuring", 00:24:13.170 "raid_level": "raid0", 00:24:13.170 "superblock": true, 00:24:13.170 "num_base_bdevs": 4, 00:24:13.170 "num_base_bdevs_discovered": 2, 00:24:13.170 "num_base_bdevs_operational": 4, 00:24:13.170 "base_bdevs_list": [ 00:24:13.170 { 00:24:13.170 "name": "BaseBdev1", 00:24:13.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.170 "is_configured": false, 00:24:13.170 "data_offset": 0, 00:24:13.170 "data_size": 0 00:24:13.170 }, 00:24:13.170 { 00:24:13.170 "name": null, 00:24:13.170 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:13.170 "is_configured": false, 00:24:13.170 "data_offset": 2048, 00:24:13.170 "data_size": 63488 00:24:13.170 }, 00:24:13.170 { 00:24:13.170 "name": "BaseBdev3", 00:24:13.170 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:13.170 "is_configured": true, 00:24:13.170 "data_offset": 2048, 00:24:13.170 "data_size": 63488 00:24:13.170 }, 00:24:13.170 { 00:24:13.170 "name": "BaseBdev4", 00:24:13.170 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:13.170 "is_configured": true, 00:24:13.170 "data_offset": 2048, 00:24:13.170 "data_size": 63488 00:24:13.170 } 00:24:13.170 ] 00:24:13.170 }' 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.170 07:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.428 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:13.428 07:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.995 07:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:24:13.995 07:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:14.255 [2024-05-16 07:37:07.570790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:14.255 BaseBdev1 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:14.255 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:14.514 07:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:14.773 [ 00:24:14.773 { 00:24:14.773 "name": "BaseBdev1", 00:24:14.773 "aliases": [ 00:24:14.773 "1915a7cd-1357-11ef-8e8f-9dd684e56d79" 00:24:14.773 ], 00:24:14.773 "product_name": "Malloc disk", 00:24:14.773 "block_size": 512, 00:24:14.773 "num_blocks": 65536, 00:24:14.773 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:14.773 "assigned_rate_limits": { 00:24:14.773 "rw_ios_per_sec": 0, 00:24:14.773 "rw_mbytes_per_sec": 0, 00:24:14.773 "r_mbytes_per_sec": 0, 00:24:14.773 "w_mbytes_per_sec": 0 00:24:14.773 }, 00:24:14.773 "claimed": true, 00:24:14.773 "claim_type": "exclusive_write", 00:24:14.773 "zoned": false, 00:24:14.773 "supported_io_types": { 00:24:14.773 "read": true, 00:24:14.773 "write": true, 00:24:14.773 "unmap": true, 00:24:14.773 "write_zeroes": true, 00:24:14.773 "flush": true, 00:24:14.773 "reset": true, 00:24:14.773 "compare": false, 00:24:14.773 "compare_and_write": false, 00:24:14.773 "abort": true, 00:24:14.773 "nvme_admin": false, 00:24:14.773 "nvme_io": false 00:24:14.773 }, 00:24:14.773 "memory_domains": [ 00:24:14.773 { 00:24:14.773 "dma_device_id": "system", 00:24:14.773 "dma_device_type": 1 00:24:14.773 }, 00:24:14.773 { 00:24:14.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.773 "dma_device_type": 2 00:24:14.773 } 00:24:14.773 ], 00:24:14.773 "driver_specific": {} 00:24:14.773 } 00:24:14.773 ] 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.773 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.033 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:15.033 "name": "Existed_Raid", 00:24:15.033 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:15.033 "strip_size_kb": 64, 00:24:15.033 "state": "configuring", 00:24:15.033 "raid_level": "raid0", 00:24:15.033 "superblock": true, 00:24:15.033 "num_base_bdevs": 4, 00:24:15.033 "num_base_bdevs_discovered": 3, 00:24:15.033 
"num_base_bdevs_operational": 4, 00:24:15.033 "base_bdevs_list": [ 00:24:15.033 { 00:24:15.033 "name": "BaseBdev1", 00:24:15.033 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:15.033 "is_configured": true, 00:24:15.033 "data_offset": 2048, 00:24:15.033 "data_size": 63488 00:24:15.033 }, 00:24:15.033 { 00:24:15.033 "name": null, 00:24:15.033 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:15.033 "is_configured": false, 00:24:15.033 "data_offset": 2048, 00:24:15.033 "data_size": 63488 00:24:15.033 }, 00:24:15.033 { 00:24:15.033 "name": "BaseBdev3", 00:24:15.033 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:15.033 "is_configured": true, 00:24:15.033 "data_offset": 2048, 00:24:15.033 "data_size": 63488 00:24:15.033 }, 00:24:15.033 { 00:24:15.033 "name": "BaseBdev4", 00:24:15.033 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:15.033 "is_configured": true, 00:24:15.033 "data_offset": 2048, 00:24:15.033 "data_size": 63488 00:24:15.033 } 00:24:15.033 ] 00:24:15.033 }' 00:24:15.033 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:15.033 07:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.292 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:15.292 07:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.551 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:15.551 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:15.809 [2024-05-16 07:37:09.302744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.810 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.068 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:16.068 "name": "Existed_Raid", 00:24:16.068 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:16.068 "strip_size_kb": 64, 00:24:16.068 "state": "configuring", 00:24:16.068 "raid_level": "raid0", 00:24:16.068 "superblock": true, 00:24:16.068 "num_base_bdevs": 4, 00:24:16.068 "num_base_bdevs_discovered": 2, 00:24:16.068 "num_base_bdevs_operational": 4, 00:24:16.068 "base_bdevs_list": [ 00:24:16.068 { 00:24:16.068 "name": "BaseBdev1", 00:24:16.068 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:16.068 "is_configured": true, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 }, 00:24:16.068 { 00:24:16.068 "name": null, 00:24:16.068 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:16.068 "is_configured": false, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 }, 00:24:16.068 { 00:24:16.068 "name": null, 00:24:16.068 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:16.068 "is_configured": false, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 }, 00:24:16.068 { 00:24:16.068 "name": "BaseBdev4", 00:24:16.068 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:16.068 "is_configured": true, 00:24:16.068 "data_offset": 2048, 00:24:16.068 "data_size": 63488 00:24:16.068 } 00:24:16.068 ] 00:24:16.068 }' 00:24:16.068 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.068 07:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.633 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.633 07:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:16.891 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:24:16.891 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:16.891 [2024-05-16 07:37:10.446801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.150 07:37:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.150 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.408 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:17.408 "name": "Existed_Raid", 00:24:17.408 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:17.408 "strip_size_kb": 64, 00:24:17.408 "state": "configuring", 00:24:17.408 "raid_level": "raid0", 00:24:17.408 "superblock": true, 00:24:17.408 "num_base_bdevs": 4, 00:24:17.408 "num_base_bdevs_discovered": 3, 00:24:17.408 "num_base_bdevs_operational": 4, 00:24:17.408 "base_bdevs_list": [ 00:24:17.408 { 00:24:17.408 "name": "BaseBdev1", 00:24:17.408 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:17.408 "is_configured": true, 00:24:17.408 "data_offset": 2048, 00:24:17.408 "data_size": 63488 00:24:17.408 }, 00:24:17.408 { 00:24:17.408 "name": null, 00:24:17.408 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:17.408 "is_configured": false, 00:24:17.408 "data_offset": 2048, 00:24:17.408 "data_size": 63488 00:24:17.408 }, 00:24:17.408 { 00:24:17.408 "name": "BaseBdev3", 00:24:17.408 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:17.408 "is_configured": true, 00:24:17.408 "data_offset": 2048, 00:24:17.408 "data_size": 63488 00:24:17.408 }, 00:24:17.408 { 00:24:17.408 "name": "BaseBdev4", 00:24:17.408 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:17.408 "is_configured": true, 00:24:17.408 "data_offset": 2048, 00:24:17.408 "data_size": 63488 00:24:17.408 } 00:24:17.408 ] 00:24:17.408 }' 00:24:17.408 07:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:17.408 07:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.667 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.667 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:17.925 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:24:17.925 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:18.183 [2024-05-16 07:37:11.594815] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
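The verify_raid_bdev_state checks traced above and below all reduce to one RPC call filtered with jq; a minimal standalone sketch of that step, assuming the rpc.py path and socket shown in the trace (the expected values here are examples, and the variable names are illustrative rather than the test's own), is:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch every raid bdev and keep only the entry named Existed_Raid.
    raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the fields the test asserts on: state, raid level, strip size and operational member count.
    [[ $(jq -r '.state'                      <<<"$raid_bdev_info") == "configuring" ]] || exit 1
    [[ $(jq -r '.raid_level'                 <<<"$raid_bdev_info") == "raid0"       ]] || exit 1
    [[ $(jq -r '.strip_size_kb'              <<<"$raid_bdev_info") == "64"          ]] || exit 1
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info") == "4"           ]] || exit 1
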
00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.183 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.442 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.442 "name": "Existed_Raid", 00:24:18.442 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:18.442 "strip_size_kb": 64, 00:24:18.442 "state": "configuring", 00:24:18.442 "raid_level": "raid0", 00:24:18.442 "superblock": true, 00:24:18.442 "num_base_bdevs": 4, 00:24:18.442 "num_base_bdevs_discovered": 2, 00:24:18.442 "num_base_bdevs_operational": 4, 00:24:18.442 "base_bdevs_list": [ 00:24:18.442 { 00:24:18.442 "name": null, 00:24:18.442 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:18.442 "is_configured": false, 00:24:18.442 "data_offset": 2048, 00:24:18.442 "data_size": 63488 00:24:18.442 }, 00:24:18.442 { 00:24:18.442 "name": null, 00:24:18.442 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:18.442 "is_configured": false, 00:24:18.442 "data_offset": 2048, 00:24:18.442 "data_size": 63488 00:24:18.442 }, 00:24:18.442 { 00:24:18.442 "name": "BaseBdev3", 00:24:18.442 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:18.442 "is_configured": true, 00:24:18.442 "data_offset": 2048, 00:24:18.442 "data_size": 63488 00:24:18.442 }, 00:24:18.442 { 00:24:18.442 "name": "BaseBdev4", 00:24:18.442 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:18.442 "is_configured": true, 00:24:18.442 "data_offset": 2048, 00:24:18.442 "data_size": 63488 00:24:18.442 } 00:24:18.442 ] 00:24:18.442 }' 00:24:18.442 07:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.442 07:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.701 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.701 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:19.268 [2024-05-16 07:37:12.775729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.268 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.269 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.269 07:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.527 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.527 "name": "Existed_Raid", 00:24:19.527 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:19.527 "strip_size_kb": 64, 00:24:19.527 "state": "configuring", 00:24:19.527 "raid_level": "raid0", 00:24:19.527 "superblock": true, 00:24:19.527 "num_base_bdevs": 4, 00:24:19.527 "num_base_bdevs_discovered": 3, 00:24:19.527 "num_base_bdevs_operational": 4, 00:24:19.527 "base_bdevs_list": [ 00:24:19.527 { 00:24:19.527 "name": null, 00:24:19.528 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:19.528 "is_configured": false, 00:24:19.528 "data_offset": 2048, 00:24:19.528 "data_size": 63488 00:24:19.528 }, 00:24:19.528 { 00:24:19.528 "name": "BaseBdev2", 00:24:19.528 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:19.528 "is_configured": true, 00:24:19.528 "data_offset": 2048, 00:24:19.528 "data_size": 63488 00:24:19.528 }, 00:24:19.528 { 00:24:19.528 "name": "BaseBdev3", 00:24:19.528 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:19.528 "is_configured": true, 00:24:19.528 "data_offset": 2048, 00:24:19.528 "data_size": 63488 00:24:19.528 }, 00:24:19.528 { 00:24:19.528 "name": "BaseBdev4", 00:24:19.528 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:19.528 "is_configured": true, 00:24:19.528 "data_offset": 2048, 00:24:19.528 "data_size": 63488 00:24:19.528 } 00:24:19.528 ] 00:24:19.528 }' 00:24:19.528 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.528 07:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.095 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:20.353 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:24:20.353 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.353 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:20.611 07:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1915a7cd-1357-11ef-8e8f-9dd684e56d79 00:24:20.870 [2024-05-16 07:37:14.204095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:20.870 [2024-05-16 07:37:14.204149] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e9a3f00 00:24:20.870 [2024-05-16 07:37:14.204154] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:20.870 [2024-05-16 07:37:14.204172] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ea06e20 00:24:20.870 [2024-05-16 07:37:14.204217] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e9a3f00 00:24:20.870 [2024-05-16 07:37:14.204220] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e9a3f00 00:24:20.870 [2024-05-16 07:37:14.204237] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.870 NewBaseBdev 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:20.870 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.128 07:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:21.387 [ 00:24:21.387 { 00:24:21.387 "name": "NewBaseBdev", 00:24:21.387 "aliases": [ 00:24:21.387 "1915a7cd-1357-11ef-8e8f-9dd684e56d79" 00:24:21.387 ], 00:24:21.387 "product_name": "Malloc disk", 00:24:21.387 "block_size": 512, 00:24:21.387 "num_blocks": 65536, 00:24:21.387 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:21.387 "assigned_rate_limits": { 00:24:21.387 "rw_ios_per_sec": 0, 00:24:21.387 "rw_mbytes_per_sec": 0, 00:24:21.387 "r_mbytes_per_sec": 0, 00:24:21.387 "w_mbytes_per_sec": 0 00:24:21.387 }, 00:24:21.387 "claimed": true, 00:24:21.387 "claim_type": "exclusive_write", 00:24:21.387 "zoned": false, 00:24:21.387 "supported_io_types": { 00:24:21.387 "read": true, 00:24:21.387 "write": true, 00:24:21.387 "unmap": true, 00:24:21.387 "write_zeroes": true, 00:24:21.387 "flush": true, 00:24:21.387 "reset": true, 00:24:21.387 "compare": false, 00:24:21.387 "compare_and_write": false, 00:24:21.387 "abort": true, 00:24:21.387 "nvme_admin": false, 00:24:21.387 "nvme_io": false 00:24:21.387 }, 00:24:21.387 "memory_domains": [ 00:24:21.387 { 00:24:21.387 "dma_device_id": "system", 00:24:21.387 "dma_device_type": 1 00:24:21.387 }, 00:24:21.387 { 00:24:21.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.387 "dma_device_type": 2 00:24:21.387 } 00:24:21.387 ], 00:24:21.387 "driver_specific": {} 00:24:21.387 } 00:24:21.387 ] 00:24:21.387 07:37:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.387 07:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.646 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.646 "name": "Existed_Raid", 00:24:21.646 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:21.646 "strip_size_kb": 64, 00:24:21.646 "state": "online", 00:24:21.646 "raid_level": "raid0", 00:24:21.646 "superblock": true, 00:24:21.646 "num_base_bdevs": 4, 00:24:21.646 "num_base_bdevs_discovered": 4, 00:24:21.646 "num_base_bdevs_operational": 4, 00:24:21.646 "base_bdevs_list": [ 00:24:21.646 { 00:24:21.646 "name": "NewBaseBdev", 00:24:21.646 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:21.646 "is_configured": true, 00:24:21.646 "data_offset": 2048, 00:24:21.646 "data_size": 63488 00:24:21.646 }, 00:24:21.646 { 00:24:21.646 "name": "BaseBdev2", 00:24:21.646 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:21.646 "is_configured": true, 00:24:21.646 "data_offset": 2048, 00:24:21.646 "data_size": 63488 00:24:21.646 }, 00:24:21.646 { 00:24:21.646 "name": "BaseBdev3", 00:24:21.646 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:21.646 "is_configured": true, 00:24:21.646 "data_offset": 2048, 00:24:21.646 "data_size": 63488 00:24:21.646 }, 00:24:21.646 { 00:24:21.646 "name": "BaseBdev4", 00:24:21.646 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:21.646 "is_configured": true, 00:24:21.646 "data_offset": 2048, 00:24:21.646 "data_size": 63488 00:24:21.646 } 00:24:21.646 ] 00:24:21.646 }' 00:24:21.646 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.646 07:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:21.904 07:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:21.904 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:22.162 [2024-05-16 07:37:15.535357] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.162 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:22.162 "name": "Existed_Raid", 00:24:22.162 "aliases": [ 00:24:22.162 "17da15fa-1357-11ef-8e8f-9dd684e56d79" 00:24:22.162 ], 00:24:22.162 "product_name": "Raid Volume", 00:24:22.162 "block_size": 512, 00:24:22.162 "num_blocks": 253952, 00:24:22.162 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:22.162 "assigned_rate_limits": { 00:24:22.162 "rw_ios_per_sec": 0, 00:24:22.162 "rw_mbytes_per_sec": 0, 00:24:22.163 "r_mbytes_per_sec": 0, 00:24:22.163 "w_mbytes_per_sec": 0 00:24:22.163 }, 00:24:22.163 "claimed": false, 00:24:22.163 "zoned": false, 00:24:22.163 "supported_io_types": { 00:24:22.163 "read": true, 00:24:22.163 "write": true, 00:24:22.163 "unmap": true, 00:24:22.163 "write_zeroes": true, 00:24:22.163 "flush": true, 00:24:22.163 "reset": true, 00:24:22.163 "compare": false, 00:24:22.163 "compare_and_write": false, 00:24:22.163 "abort": false, 00:24:22.163 "nvme_admin": false, 00:24:22.163 "nvme_io": false 00:24:22.163 }, 00:24:22.163 "memory_domains": [ 00:24:22.163 { 00:24:22.163 "dma_device_id": "system", 00:24:22.163 "dma_device_type": 1 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.163 "dma_device_type": 2 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "system", 00:24:22.163 "dma_device_type": 1 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.163 "dma_device_type": 2 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "system", 00:24:22.163 "dma_device_type": 1 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.163 "dma_device_type": 2 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "system", 00:24:22.163 "dma_device_type": 1 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.163 "dma_device_type": 2 00:24:22.163 } 00:24:22.163 ], 00:24:22.163 "driver_specific": { 00:24:22.163 "raid": { 00:24:22.163 "uuid": "17da15fa-1357-11ef-8e8f-9dd684e56d79", 00:24:22.163 "strip_size_kb": 64, 00:24:22.163 "state": "online", 00:24:22.163 "raid_level": "raid0", 00:24:22.163 "superblock": true, 00:24:22.163 "num_base_bdevs": 4, 00:24:22.163 "num_base_bdevs_discovered": 4, 00:24:22.163 "num_base_bdevs_operational": 4, 00:24:22.163 "base_bdevs_list": [ 00:24:22.163 { 00:24:22.163 "name": "NewBaseBdev", 00:24:22.163 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:22.163 "is_configured": true, 00:24:22.163 "data_offset": 2048, 00:24:22.163 "data_size": 63488 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "name": "BaseBdev2", 00:24:22.163 "uuid": 
"168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:22.163 "is_configured": true, 00:24:22.163 "data_offset": 2048, 00:24:22.163 "data_size": 63488 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "name": "BaseBdev3", 00:24:22.163 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:22.163 "is_configured": true, 00:24:22.163 "data_offset": 2048, 00:24:22.163 "data_size": 63488 00:24:22.163 }, 00:24:22.163 { 00:24:22.163 "name": "BaseBdev4", 00:24:22.163 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:22.163 "is_configured": true, 00:24:22.163 "data_offset": 2048, 00:24:22.163 "data_size": 63488 00:24:22.163 } 00:24:22.163 ] 00:24:22.163 } 00:24:22.163 } 00:24:22.163 }' 00:24:22.163 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.163 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:24:22.163 BaseBdev2 00:24:22.163 BaseBdev3 00:24:22.163 BaseBdev4' 00:24:22.163 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:22.163 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:22.163 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:22.421 "name": "NewBaseBdev", 00:24:22.421 "aliases": [ 00:24:22.421 "1915a7cd-1357-11ef-8e8f-9dd684e56d79" 00:24:22.421 ], 00:24:22.421 "product_name": "Malloc disk", 00:24:22.421 "block_size": 512, 00:24:22.421 "num_blocks": 65536, 00:24:22.421 "uuid": "1915a7cd-1357-11ef-8e8f-9dd684e56d79", 00:24:22.421 "assigned_rate_limits": { 00:24:22.421 "rw_ios_per_sec": 0, 00:24:22.421 "rw_mbytes_per_sec": 0, 00:24:22.421 "r_mbytes_per_sec": 0, 00:24:22.421 "w_mbytes_per_sec": 0 00:24:22.421 }, 00:24:22.421 "claimed": true, 00:24:22.421 "claim_type": "exclusive_write", 00:24:22.421 "zoned": false, 00:24:22.421 "supported_io_types": { 00:24:22.421 "read": true, 00:24:22.421 "write": true, 00:24:22.421 "unmap": true, 00:24:22.421 "write_zeroes": true, 00:24:22.421 "flush": true, 00:24:22.421 "reset": true, 00:24:22.421 "compare": false, 00:24:22.421 "compare_and_write": false, 00:24:22.421 "abort": true, 00:24:22.421 "nvme_admin": false, 00:24:22.421 "nvme_io": false 00:24:22.421 }, 00:24:22.421 "memory_domains": [ 00:24:22.421 { 00:24:22.421 "dma_device_id": "system", 00:24:22.421 "dma_device_type": 1 00:24:22.421 }, 00:24:22.421 { 00:24:22.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.421 "dma_device_type": 2 00:24:22.421 } 00:24:22.421 ], 00:24:22.421 "driver_specific": {} 00:24:22.421 }' 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.421 07:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:22.421 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:22.422 07:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:22.681 "name": "BaseBdev2", 00:24:22.681 "aliases": [ 00:24:22.681 "168cd56f-1357-11ef-8e8f-9dd684e56d79" 00:24:22.681 ], 00:24:22.681 "product_name": "Malloc disk", 00:24:22.681 "block_size": 512, 00:24:22.681 "num_blocks": 65536, 00:24:22.681 "uuid": "168cd56f-1357-11ef-8e8f-9dd684e56d79", 00:24:22.681 "assigned_rate_limits": { 00:24:22.681 "rw_ios_per_sec": 0, 00:24:22.681 "rw_mbytes_per_sec": 0, 00:24:22.681 "r_mbytes_per_sec": 0, 00:24:22.681 "w_mbytes_per_sec": 0 00:24:22.681 }, 00:24:22.681 "claimed": true, 00:24:22.681 "claim_type": "exclusive_write", 00:24:22.681 "zoned": false, 00:24:22.681 "supported_io_types": { 00:24:22.681 "read": true, 00:24:22.681 "write": true, 00:24:22.681 "unmap": true, 00:24:22.681 "write_zeroes": true, 00:24:22.681 "flush": true, 00:24:22.681 "reset": true, 00:24:22.681 "compare": false, 00:24:22.681 "compare_and_write": false, 00:24:22.681 "abort": true, 00:24:22.681 "nvme_admin": false, 00:24:22.681 "nvme_io": false 00:24:22.681 }, 00:24:22.681 "memory_domains": [ 00:24:22.681 { 00:24:22.681 "dma_device_id": "system", 00:24:22.681 "dma_device_type": 1 00:24:22.681 }, 00:24:22.681 { 00:24:22.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.681 "dma_device_type": 2 00:24:22.681 } 00:24:22.681 ], 00:24:22.681 "driver_specific": {} 00:24:22.681 }' 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.681 07:37:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:22.681 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:22.941 "name": "BaseBdev3", 00:24:22.941 "aliases": [ 00:24:22.941 "170168f1-1357-11ef-8e8f-9dd684e56d79" 00:24:22.941 ], 00:24:22.941 "product_name": "Malloc disk", 00:24:22.941 "block_size": 512, 00:24:22.941 "num_blocks": 65536, 00:24:22.941 "uuid": "170168f1-1357-11ef-8e8f-9dd684e56d79", 00:24:22.941 "assigned_rate_limits": { 00:24:22.941 "rw_ios_per_sec": 0, 00:24:22.941 "rw_mbytes_per_sec": 0, 00:24:22.941 "r_mbytes_per_sec": 0, 00:24:22.941 "w_mbytes_per_sec": 0 00:24:22.941 }, 00:24:22.941 "claimed": true, 00:24:22.941 "claim_type": "exclusive_write", 00:24:22.941 "zoned": false, 00:24:22.941 "supported_io_types": { 00:24:22.941 "read": true, 00:24:22.941 "write": true, 00:24:22.941 "unmap": true, 00:24:22.941 "write_zeroes": true, 00:24:22.941 "flush": true, 00:24:22.941 "reset": true, 00:24:22.941 "compare": false, 00:24:22.941 "compare_and_write": false, 00:24:22.941 "abort": true, 00:24:22.941 "nvme_admin": false, 00:24:22.941 "nvme_io": false 00:24:22.941 }, 00:24:22.941 "memory_domains": [ 00:24:22.941 { 00:24:22.941 "dma_device_id": "system", 00:24:22.941 "dma_device_type": 1 00:24:22.941 }, 00:24:22.941 { 00:24:22.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.941 "dma_device_type": 2 00:24:22.941 } 00:24:22.941 ], 00:24:22.941 "driver_specific": {} 00:24:22.941 }' 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.941 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.201 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.201 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:23.201 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:23.201 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:23.201 07:37:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:23.460 "name": "BaseBdev4", 00:24:23.460 "aliases": [ 00:24:23.460 "1772efc8-1357-11ef-8e8f-9dd684e56d79" 00:24:23.460 ], 00:24:23.460 "product_name": "Malloc disk", 00:24:23.460 "block_size": 512, 00:24:23.460 "num_blocks": 65536, 00:24:23.460 "uuid": "1772efc8-1357-11ef-8e8f-9dd684e56d79", 00:24:23.460 "assigned_rate_limits": { 00:24:23.460 "rw_ios_per_sec": 0, 00:24:23.460 "rw_mbytes_per_sec": 0, 00:24:23.460 "r_mbytes_per_sec": 0, 00:24:23.460 "w_mbytes_per_sec": 0 00:24:23.460 }, 00:24:23.460 "claimed": true, 00:24:23.460 "claim_type": "exclusive_write", 00:24:23.460 "zoned": false, 00:24:23.460 "supported_io_types": { 00:24:23.460 "read": true, 00:24:23.460 "write": true, 00:24:23.460 "unmap": true, 00:24:23.460 "write_zeroes": true, 00:24:23.460 "flush": true, 00:24:23.460 "reset": true, 00:24:23.460 "compare": false, 00:24:23.460 "compare_and_write": false, 00:24:23.460 "abort": true, 00:24:23.460 "nvme_admin": false, 00:24:23.460 "nvme_io": false 00:24:23.460 }, 00:24:23.460 "memory_domains": [ 00:24:23.460 { 00:24:23.460 "dma_device_id": "system", 00:24:23.460 "dma_device_type": 1 00:24:23.460 }, 00:24:23.460 { 00:24:23.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.460 "dma_device_type": 2 00:24:23.460 } 00:24:23.460 ], 00:24:23.460 "driver_specific": {} 00:24:23.460 }' 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:23.460 07:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:23.718 [2024-05-16 07:37:17.119316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:23.718 [2024-05-16 07:37:17.119337] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:23.718 [2024-05-16 07:37:17.119350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.718 [2024-05-16 07:37:17.119362] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:24:23.718 [2024-05-16 07:37:17.119366] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e9a3f00 name Existed_Raid, state offline 00:24:23.718 07:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 58457 00:24:23.718 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 58457 ']' 00:24:23.718 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 58457 00:24:23.718 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:24:23.718 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 58457 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:24:23.719 killing process with pid 58457 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58457' 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 58457 00:24:23.719 [2024-05-16 07:37:17.150171] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.719 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 58457 00:24:23.719 [2024-05-16 07:37:17.168825] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:23.993 07:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:24:23.993 00:24:23.993 real 0m27.633s 00:24:23.993 user 0m50.729s 00:24:23.993 sys 0m3.770s 00:24:23.993 ************************************ 00:24:23.993 END TEST raid_state_function_test_sb 00:24:23.993 ************************************ 00:24:23.993 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:23.993 07:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.993 07:37:17 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:23.993 07:37:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:23.993 07:37:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:23.993 07:37:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:23.993 ************************************ 00:24:23.993 START TEST raid_superblock_test 00:24:23.993 ************************************ 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:23.993 
07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=59275 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 59275 /var/tmp/spdk-raid.sock 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 59275 ']' 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:23.993 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:23.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:23.994 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:23.994 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:23.994 07:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.994 [2024-05-16 07:37:17.389949] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:24:23.994 [2024-05-16 07:37:17.390131] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:24:24.560 EAL: TSC is not safe to use in SMP mode 00:24:24.560 EAL: TSC is not invariant 00:24:24.560 [2024-05-16 07:37:17.873572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.560 [2024-05-16 07:37:17.957966] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
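The raid_superblock_test setup that follows starts a private bdev_svc instance and then layers a passthru bdev on each malloc bdev; a condensed sketch of that sequence, using only the commands visible in the trace (the readiness wait is a simplified, hypothetical stand-in for the framework's waitforlisten helper), is:

    app=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Start the bdev service on its own RPC socket with raid debug logging enabled.
    "$app" -r "$sock" -L bdev_raid &
    svc_pid=$!

    # Wait until the RPC socket answers before issuing commands (simplified wait loop).
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # One malloc bdev plus passthru wrapper per future raid member, mirroring pt1/pt2/pt3 below.
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
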
00:24:24.560 [2024-05-16 07:37:17.960097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.560 [2024-05-16 07:37:17.960801] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.560 [2024-05-16 07:37:17.960820] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:25.127 malloc1 00:24:25.127 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:25.385 [2024-05-16 07:37:18.887416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:25.385 [2024-05-16 07:37:18.887467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.385 [2024-05-16 07:37:18.887984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2ea780 00:24:25.385 [2024-05-16 07:37:18.888005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.385 [2024-05-16 07:37:18.888707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.385 [2024-05-16 07:37:18.888738] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:25.385 pt1 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:25.385 07:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:25.385 07:37:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:25.644 malloc2 00:24:25.644 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:25.903 [2024-05-16 07:37:19.351418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:25.903 [2024-05-16 07:37:19.351472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.903 [2024-05-16 07:37:19.351498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2eac80 00:24:25.903 [2024-05-16 07:37:19.351506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.903 [2024-05-16 07:37:19.351974] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.903 [2024-05-16 07:37:19.352003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:25.903 pt2 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:25.903 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:26.198 malloc3 00:24:26.198 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:26.457 [2024-05-16 07:37:19.939446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:26.457 [2024-05-16 07:37:19.939502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.457 [2024-05-16 07:37:19.939528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2eb180 00:24:26.457 [2024-05-16 07:37:19.939535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.457 [2024-05-16 07:37:19.940067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.457 [2024-05-16 07:37:19.940099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:26.457 pt3 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_malloc=malloc4 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:26.457 07:37:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:26.716 malloc4 00:24:26.716 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:26.975 [2024-05-16 07:37:20.343498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:26.975 [2024-05-16 07:37:20.343577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.975 [2024-05-16 07:37:20.343611] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2eb680 00:24:26.975 [2024-05-16 07:37:20.343624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.975 [2024-05-16 07:37:20.344289] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.975 [2024-05-16 07:37:20.344339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:26.975 pt4 00:24:26.975 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:26.975 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:26.975 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:27.234 [2024-05-16 07:37:20.535508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:27.234 [2024-05-16 07:37:20.535967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:27.234 [2024-05-16 07:37:20.535981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:27.234 [2024-05-16 07:37:20.535992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:27.234 [2024-05-16 07:37:20.536042] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a2eb900 00:24:27.234 [2024-05-16 07:37:20.536047] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:27.234 [2024-05-16 07:37:20.536093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a34de20 00:24:27.234 [2024-05-16 07:37:20.536153] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a2eb900 00:24:27.234 [2024-05-16 07:37:20.536157] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a2eb900 00:24:27.234 [2024-05-16 07:37:20.536184] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.234 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.492 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:27.492 "name": "raid_bdev1", 00:24:27.492 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:27.492 "strip_size_kb": 64, 00:24:27.492 "state": "online", 00:24:27.492 "raid_level": "raid0", 00:24:27.492 "superblock": true, 00:24:27.492 "num_base_bdevs": 4, 00:24:27.492 "num_base_bdevs_discovered": 4, 00:24:27.492 "num_base_bdevs_operational": 4, 00:24:27.492 "base_bdevs_list": [ 00:24:27.492 { 00:24:27.492 "name": "pt1", 00:24:27.492 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:27.492 "is_configured": true, 00:24:27.492 "data_offset": 2048, 00:24:27.492 "data_size": 63488 00:24:27.492 }, 00:24:27.492 { 00:24:27.492 "name": "pt2", 00:24:27.492 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:27.492 "is_configured": true, 00:24:27.492 "data_offset": 2048, 00:24:27.492 "data_size": 63488 00:24:27.492 }, 00:24:27.492 { 00:24:27.492 "name": "pt3", 00:24:27.492 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:27.492 "is_configured": true, 00:24:27.492 "data_offset": 2048, 00:24:27.492 "data_size": 63488 00:24:27.492 }, 00:24:27.492 { 00:24:27.492 "name": "pt4", 00:24:27.492 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:27.492 "is_configured": true, 00:24:27.492 "data_offset": 2048, 00:24:27.492 "data_size": 63488 00:24:27.492 } 00:24:27.492 ] 00:24:27.492 }' 00:24:27.492 07:37:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:27.492 07:37:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@199 -- # local name 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:27.750 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:28.008 [2024-05-16 07:37:21.339515] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.008 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:28.008 "name": "raid_bdev1", 00:24:28.008 "aliases": [ 00:24:28.008 "20cfedd9-1357-11ef-8e8f-9dd684e56d79" 00:24:28.009 ], 00:24:28.009 "product_name": "Raid Volume", 00:24:28.009 "block_size": 512, 00:24:28.009 "num_blocks": 253952, 00:24:28.009 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:28.009 "assigned_rate_limits": { 00:24:28.009 "rw_ios_per_sec": 0, 00:24:28.009 "rw_mbytes_per_sec": 0, 00:24:28.009 "r_mbytes_per_sec": 0, 00:24:28.009 "w_mbytes_per_sec": 0 00:24:28.009 }, 00:24:28.009 "claimed": false, 00:24:28.009 "zoned": false, 00:24:28.009 "supported_io_types": { 00:24:28.009 "read": true, 00:24:28.009 "write": true, 00:24:28.009 "unmap": true, 00:24:28.009 "write_zeroes": true, 00:24:28.009 "flush": true, 00:24:28.009 "reset": true, 00:24:28.009 "compare": false, 00:24:28.009 "compare_and_write": false, 00:24:28.009 "abort": false, 00:24:28.009 "nvme_admin": false, 00:24:28.009 "nvme_io": false 00:24:28.009 }, 00:24:28.009 "memory_domains": [ 00:24:28.009 { 00:24:28.009 "dma_device_id": "system", 00:24:28.009 "dma_device_type": 1 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.009 "dma_device_type": 2 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "system", 00:24:28.009 "dma_device_type": 1 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.009 "dma_device_type": 2 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "system", 00:24:28.009 "dma_device_type": 1 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.009 "dma_device_type": 2 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "system", 00:24:28.009 "dma_device_type": 1 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.009 "dma_device_type": 2 00:24:28.009 } 00:24:28.009 ], 00:24:28.009 "driver_specific": { 00:24:28.009 "raid": { 00:24:28.009 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:28.009 "strip_size_kb": 64, 00:24:28.009 "state": "online", 00:24:28.009 "raid_level": "raid0", 00:24:28.009 "superblock": true, 00:24:28.009 "num_base_bdevs": 4, 00:24:28.009 "num_base_bdevs_discovered": 4, 00:24:28.009 "num_base_bdevs_operational": 4, 00:24:28.009 "base_bdevs_list": [ 00:24:28.009 { 00:24:28.009 "name": "pt1", 00:24:28.009 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:28.009 "is_configured": true, 00:24:28.009 "data_offset": 2048, 00:24:28.009 "data_size": 63488 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "name": "pt2", 00:24:28.009 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:28.009 "is_configured": true, 00:24:28.009 "data_offset": 2048, 00:24:28.009 "data_size": 63488 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "name": "pt3", 00:24:28.009 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:28.009 "is_configured": true, 00:24:28.009 "data_offset": 2048, 00:24:28.009 "data_size": 63488 00:24:28.009 }, 00:24:28.009 { 00:24:28.009 "name": 
"pt4", 00:24:28.009 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:28.009 "is_configured": true, 00:24:28.009 "data_offset": 2048, 00:24:28.009 "data_size": 63488 00:24:28.009 } 00:24:28.009 ] 00:24:28.009 } 00:24:28.009 } 00:24:28.009 }' 00:24:28.009 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:28.009 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:28.009 pt2 00:24:28.009 pt3 00:24:28.009 pt4' 00:24:28.009 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:28.009 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:28.009 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:28.268 "name": "pt1", 00:24:28.268 "aliases": [ 00:24:28.268 "2e55f847-1605-8254-91cc-d6af8ee2f8c8" 00:24:28.268 ], 00:24:28.268 "product_name": "passthru", 00:24:28.268 "block_size": 512, 00:24:28.268 "num_blocks": 65536, 00:24:28.268 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:28.268 "assigned_rate_limits": { 00:24:28.268 "rw_ios_per_sec": 0, 00:24:28.268 "rw_mbytes_per_sec": 0, 00:24:28.268 "r_mbytes_per_sec": 0, 00:24:28.268 "w_mbytes_per_sec": 0 00:24:28.268 }, 00:24:28.268 "claimed": true, 00:24:28.268 "claim_type": "exclusive_write", 00:24:28.268 "zoned": false, 00:24:28.268 "supported_io_types": { 00:24:28.268 "read": true, 00:24:28.268 "write": true, 00:24:28.268 "unmap": true, 00:24:28.268 "write_zeroes": true, 00:24:28.268 "flush": true, 00:24:28.268 "reset": true, 00:24:28.268 "compare": false, 00:24:28.268 "compare_and_write": false, 00:24:28.268 "abort": true, 00:24:28.268 "nvme_admin": false, 00:24:28.268 "nvme_io": false 00:24:28.268 }, 00:24:28.268 "memory_domains": [ 00:24:28.268 { 00:24:28.268 "dma_device_id": "system", 00:24:28.268 "dma_device_type": 1 00:24:28.268 }, 00:24:28.268 { 00:24:28.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.268 "dma_device_type": 2 00:24:28.268 } 00:24:28.268 ], 00:24:28.268 "driver_specific": { 00:24:28.268 "passthru": { 00:24:28.268 "name": "pt1", 00:24:28.268 "base_bdev_name": "malloc1" 00:24:28.268 } 00:24:28.268 } 00:24:28.268 }' 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:28.268 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:28.527 "name": "pt2", 00:24:28.527 "aliases": [ 00:24:28.527 "651f016e-cc67-cc59-aad6-a0c16b5a381b" 00:24:28.527 ], 00:24:28.527 "product_name": "passthru", 00:24:28.527 "block_size": 512, 00:24:28.527 "num_blocks": 65536, 00:24:28.527 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:28.527 "assigned_rate_limits": { 00:24:28.527 "rw_ios_per_sec": 0, 00:24:28.527 "rw_mbytes_per_sec": 0, 00:24:28.527 "r_mbytes_per_sec": 0, 00:24:28.527 "w_mbytes_per_sec": 0 00:24:28.527 }, 00:24:28.527 "claimed": true, 00:24:28.527 "claim_type": "exclusive_write", 00:24:28.527 "zoned": false, 00:24:28.527 "supported_io_types": { 00:24:28.527 "read": true, 00:24:28.527 "write": true, 00:24:28.527 "unmap": true, 00:24:28.527 "write_zeroes": true, 00:24:28.527 "flush": true, 00:24:28.527 "reset": true, 00:24:28.527 "compare": false, 00:24:28.527 "compare_and_write": false, 00:24:28.527 "abort": true, 00:24:28.527 "nvme_admin": false, 00:24:28.527 "nvme_io": false 00:24:28.527 }, 00:24:28.527 "memory_domains": [ 00:24:28.527 { 00:24:28.527 "dma_device_id": "system", 00:24:28.527 "dma_device_type": 1 00:24:28.527 }, 00:24:28.527 { 00:24:28.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.527 "dma_device_type": 2 00:24:28.527 } 00:24:28.527 ], 00:24:28.527 "driver_specific": { 00:24:28.527 "passthru": { 00:24:28.527 "name": "pt2", 00:24:28.527 "base_bdev_name": "malloc2" 00:24:28.527 } 00:24:28.527 } 00:24:28.527 }' 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:28.527 07:37:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:28.786 "name": "pt3", 00:24:28.786 "aliases": [ 00:24:28.786 "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8" 00:24:28.786 ], 00:24:28.786 "product_name": "passthru", 00:24:28.786 "block_size": 512, 00:24:28.786 "num_blocks": 65536, 00:24:28.786 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:28.786 "assigned_rate_limits": { 00:24:28.786 "rw_ios_per_sec": 0, 00:24:28.786 "rw_mbytes_per_sec": 0, 00:24:28.786 "r_mbytes_per_sec": 0, 00:24:28.786 "w_mbytes_per_sec": 0 00:24:28.786 }, 00:24:28.786 "claimed": true, 00:24:28.786 "claim_type": "exclusive_write", 00:24:28.786 "zoned": false, 00:24:28.786 "supported_io_types": { 00:24:28.786 "read": true, 00:24:28.786 "write": true, 00:24:28.786 "unmap": true, 00:24:28.786 "write_zeroes": true, 00:24:28.786 "flush": true, 00:24:28.786 "reset": true, 00:24:28.786 "compare": false, 00:24:28.786 "compare_and_write": false, 00:24:28.786 "abort": true, 00:24:28.786 "nvme_admin": false, 00:24:28.786 "nvme_io": false 00:24:28.786 }, 00:24:28.786 "memory_domains": [ 00:24:28.786 { 00:24:28.786 "dma_device_id": "system", 00:24:28.786 "dma_device_type": 1 00:24:28.786 }, 00:24:28.786 { 00:24:28.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.786 "dma_device_type": 2 00:24:28.786 } 00:24:28.786 ], 00:24:28.786 "driver_specific": { 00:24:28.786 "passthru": { 00:24:28.786 "name": "pt3", 00:24:28.786 "base_bdev_name": "malloc3" 00:24:28.786 } 00:24:28.786 } 00:24:28.786 }' 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:28.786 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:29.045 "name": "pt4", 00:24:29.045 "aliases": [ 00:24:29.045 "6d5e6951-9089-2c54-9fcf-a0d9100949e3" 00:24:29.045 ], 00:24:29.045 "product_name": "passthru", 00:24:29.045 "block_size": 512, 00:24:29.045 "num_blocks": 65536, 00:24:29.045 "uuid": 
"6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:29.045 "assigned_rate_limits": { 00:24:29.045 "rw_ios_per_sec": 0, 00:24:29.045 "rw_mbytes_per_sec": 0, 00:24:29.045 "r_mbytes_per_sec": 0, 00:24:29.045 "w_mbytes_per_sec": 0 00:24:29.045 }, 00:24:29.045 "claimed": true, 00:24:29.045 "claim_type": "exclusive_write", 00:24:29.045 "zoned": false, 00:24:29.045 "supported_io_types": { 00:24:29.045 "read": true, 00:24:29.045 "write": true, 00:24:29.045 "unmap": true, 00:24:29.045 "write_zeroes": true, 00:24:29.045 "flush": true, 00:24:29.045 "reset": true, 00:24:29.045 "compare": false, 00:24:29.045 "compare_and_write": false, 00:24:29.045 "abort": true, 00:24:29.045 "nvme_admin": false, 00:24:29.045 "nvme_io": false 00:24:29.045 }, 00:24:29.045 "memory_domains": [ 00:24:29.045 { 00:24:29.045 "dma_device_id": "system", 00:24:29.045 "dma_device_type": 1 00:24:29.045 }, 00:24:29.045 { 00:24:29.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.045 "dma_device_type": 2 00:24:29.045 } 00:24:29.045 ], 00:24:29.045 "driver_specific": { 00:24:29.045 "passthru": { 00:24:29.045 "name": "pt4", 00:24:29.045 "base_bdev_name": "malloc4" 00:24:29.045 } 00:24:29.045 } 00:24:29.045 }' 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:29.045 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:29.304 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:29.304 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:29.304 [2024-05-16 07:37:22.839514] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.304 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20cfedd9-1357-11ef-8e8f-9dd684e56d79 00:24:29.304 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 20cfedd9-1357-11ef-8e8f-9dd684e56d79 ']' 00:24:29.304 07:37:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:29.872 [2024-05-16 07:37:23.123492] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:29.872 [2024-05-16 07:37:23.123511] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.872 [2024-05-16 07:37:23.123524] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.872 [2024-05-16 07:37:23.123535] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:29.872 [2024-05-16 07:37:23.123539] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a2eb900 name raid_bdev1, state offline 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:29.872 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:30.130 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:30.130 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:30.389 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:30.389 07:37:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:30.647 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:30.647 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:30.904 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:30.904 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:31.161 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:31.418 [2024-05-16 07:37:24.759514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:31.418 [2024-05-16 07:37:24.759972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:31.418 [2024-05-16 07:37:24.759991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:31.418 [2024-05-16 07:37:24.759998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:31.418 [2024-05-16 07:37:24.760010] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:31.418 [2024-05-16 07:37:24.760047] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:31.418 [2024-05-16 07:37:24.760056] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:31.418 [2024-05-16 07:37:24.760064] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:31.418 [2024-05-16 07:37:24.760072] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.418 [2024-05-16 07:37:24.760077] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a2eb680 name raid_bdev1, state configuring 00:24:31.418 request: 00:24:31.418 { 00:24:31.418 "name": "raid_bdev1", 00:24:31.418 "raid_level": "raid0", 00:24:31.418 "base_bdevs": [ 00:24:31.418 "malloc1", 00:24:31.419 "malloc2", 00:24:31.419 "malloc3", 00:24:31.419 "malloc4" 00:24:31.419 ], 00:24:31.419 "superblock": false, 00:24:31.419 "strip_size_kb": 64, 00:24:31.419 "method": "bdev_raid_create", 00:24:31.419 "req_id": 1 00:24:31.419 } 00:24:31.419 Got JSON-RPC error response 00:24:31.419 response: 00:24:31.419 { 00:24:31.419 "code": -17, 00:24:31.419 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:31.419 } 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.419 07:37:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 
00:24:31.684 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:31.684 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:31.684 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:31.940 [2024-05-16 07:37:25.303509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:31.940 [2024-05-16 07:37:25.303557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.940 [2024-05-16 07:37:25.303582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2eb180 00:24:31.940 [2024-05-16 07:37:25.303589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.940 [2024-05-16 07:37:25.304061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.940 [2024-05-16 07:37:25.304084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:31.940 [2024-05-16 07:37:25.304103] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:31.940 [2024-05-16 07:37:25.304113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:31.940 pt1 00:24:31.940 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:31.940 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:31.940 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:31.940 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:31.940 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.941 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.199 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.199 "name": "raid_bdev1", 00:24:32.199 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:32.199 "strip_size_kb": 64, 00:24:32.199 "state": "configuring", 00:24:32.199 "raid_level": "raid0", 00:24:32.199 "superblock": true, 00:24:32.199 "num_base_bdevs": 4, 00:24:32.199 "num_base_bdevs_discovered": 1, 00:24:32.199 "num_base_bdevs_operational": 4, 00:24:32.199 "base_bdevs_list": [ 00:24:32.199 { 00:24:32.199 "name": "pt1", 00:24:32.199 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:32.199 "is_configured": true, 00:24:32.199 "data_offset": 2048, 00:24:32.199 "data_size": 63488 00:24:32.199 }, 
00:24:32.199 { 00:24:32.199 "name": null, 00:24:32.199 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:32.199 "is_configured": false, 00:24:32.199 "data_offset": 2048, 00:24:32.199 "data_size": 63488 00:24:32.199 }, 00:24:32.199 { 00:24:32.199 "name": null, 00:24:32.199 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:32.199 "is_configured": false, 00:24:32.199 "data_offset": 2048, 00:24:32.199 "data_size": 63488 00:24:32.199 }, 00:24:32.199 { 00:24:32.199 "name": null, 00:24:32.199 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:32.199 "is_configured": false, 00:24:32.199 "data_offset": 2048, 00:24:32.199 "data_size": 63488 00:24:32.199 } 00:24:32.199 ] 00:24:32.199 }' 00:24:32.199 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.199 07:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.456 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:24:32.456 07:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:32.714 [2024-05-16 07:37:26.111525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.714 [2024-05-16 07:37:26.111571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.714 [2024-05-16 07:37:26.111595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2ea780 00:24:32.714 [2024-05-16 07:37:26.111603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.714 [2024-05-16 07:37:26.111680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.714 [2024-05-16 07:37:26.111688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:32.714 [2024-05-16 07:37:26.111704] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:32.714 [2024-05-16 07:37:26.111710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.714 pt2 00:24:32.714 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:32.971 [2024-05-16 07:37:26.315532] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.971 07:37:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.971 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.261 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.261 "name": "raid_bdev1", 00:24:33.261 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:33.261 "strip_size_kb": 64, 00:24:33.261 "state": "configuring", 00:24:33.261 "raid_level": "raid0", 00:24:33.261 "superblock": true, 00:24:33.261 "num_base_bdevs": 4, 00:24:33.261 "num_base_bdevs_discovered": 1, 00:24:33.261 "num_base_bdevs_operational": 4, 00:24:33.261 "base_bdevs_list": [ 00:24:33.261 { 00:24:33.261 "name": "pt1", 00:24:33.261 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:33.261 "is_configured": true, 00:24:33.261 "data_offset": 2048, 00:24:33.261 "data_size": 63488 00:24:33.261 }, 00:24:33.261 { 00:24:33.261 "name": null, 00:24:33.261 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:33.261 "is_configured": false, 00:24:33.261 "data_offset": 2048, 00:24:33.261 "data_size": 63488 00:24:33.261 }, 00:24:33.261 { 00:24:33.261 "name": null, 00:24:33.261 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:33.261 "is_configured": false, 00:24:33.261 "data_offset": 2048, 00:24:33.262 "data_size": 63488 00:24:33.262 }, 00:24:33.262 { 00:24:33.262 "name": null, 00:24:33.262 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:33.262 "is_configured": false, 00:24:33.262 "data_offset": 2048, 00:24:33.262 "data_size": 63488 00:24:33.262 } 00:24:33.262 ] 00:24:33.262 }' 00:24:33.262 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.262 07:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.520 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:33.520 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:33.520 07:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:33.781 [2024-05-16 07:37:27.131598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:33.781 [2024-05-16 07:37:27.131642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.781 [2024-05-16 07:37:27.131659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2ea780 00:24:33.781 [2024-05-16 07:37:27.131666] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.781 [2024-05-16 07:37:27.131726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.781 [2024-05-16 07:37:27.131734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:33.781 [2024-05-16 07:37:27.131748] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:33.781 [2024-05-16 07:37:27.131754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:33.781 pt2 00:24:33.781 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:33.781 07:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:33.781 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:34.041 [2024-05-16 07:37:27.355628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:34.041 [2024-05-16 07:37:27.355668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.041 [2024-05-16 07:37:27.355699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2ebb80 00:24:34.041 [2024-05-16 07:37:27.355706] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.041 [2024-05-16 07:37:27.355767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.041 [2024-05-16 07:37:27.355775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:34.041 [2024-05-16 07:37:27.355790] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:34.041 [2024-05-16 07:37:27.355795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:34.041 pt3 00:24:34.041 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.041 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.041 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:34.301 [2024-05-16 07:37:27.647653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:34.301 [2024-05-16 07:37:27.647698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.301 [2024-05-16 07:37:27.647796] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2eb900 00:24:34.301 [2024-05-16 07:37:27.647804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.301 [2024-05-16 07:37:27.647866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.301 [2024-05-16 07:37:27.647874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:34.301 [2024-05-16 07:37:27.647890] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:34.301 [2024-05-16 07:37:27.647896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:34.301 [2024-05-16 07:37:27.647916] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a2eac80 00:24:34.301 [2024-05-16 07:37:27.647920] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:34.301 [2024-05-16 07:37:27.647938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a34de20 00:24:34.301 [2024-05-16 07:37:27.647979] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a2eac80 00:24:34.301 [2024-05-16 07:37:27.647983] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a2eac80 00:24:34.301 [2024-05-16 07:37:27.648000] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.301 pt4 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.301 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.560 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:34.560 "name": "raid_bdev1", 00:24:34.560 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:34.560 "strip_size_kb": 64, 00:24:34.560 "state": "online", 00:24:34.560 "raid_level": "raid0", 00:24:34.560 "superblock": true, 00:24:34.560 "num_base_bdevs": 4, 00:24:34.560 "num_base_bdevs_discovered": 4, 00:24:34.560 "num_base_bdevs_operational": 4, 00:24:34.560 "base_bdevs_list": [ 00:24:34.560 { 00:24:34.560 "name": "pt1", 00:24:34.560 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:34.560 "is_configured": true, 00:24:34.560 "data_offset": 2048, 00:24:34.560 "data_size": 63488 00:24:34.560 }, 00:24:34.560 { 00:24:34.560 "name": "pt2", 00:24:34.560 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:34.560 "is_configured": true, 00:24:34.560 "data_offset": 2048, 00:24:34.560 "data_size": 63488 00:24:34.560 }, 00:24:34.560 { 00:24:34.560 "name": "pt3", 00:24:34.560 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:34.560 "is_configured": true, 00:24:34.560 "data_offset": 2048, 00:24:34.560 "data_size": 63488 00:24:34.560 }, 00:24:34.560 { 00:24:34.560 "name": "pt4", 00:24:34.560 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:34.560 "is_configured": true, 00:24:34.560 "data_offset": 2048, 00:24:34.560 "data_size": 63488 00:24:34.560 } 00:24:34.560 ] 00:24:34.560 }' 00:24:34.560 07:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:34.560 07:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:34.819 07:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:34.819 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:35.078 [2024-05-16 07:37:28.423720] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.078 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:35.079 "name": "raid_bdev1", 00:24:35.079 "aliases": [ 00:24:35.079 "20cfedd9-1357-11ef-8e8f-9dd684e56d79" 00:24:35.079 ], 00:24:35.079 "product_name": "Raid Volume", 00:24:35.079 "block_size": 512, 00:24:35.079 "num_blocks": 253952, 00:24:35.079 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:35.079 "assigned_rate_limits": { 00:24:35.079 "rw_ios_per_sec": 0, 00:24:35.079 "rw_mbytes_per_sec": 0, 00:24:35.079 "r_mbytes_per_sec": 0, 00:24:35.079 "w_mbytes_per_sec": 0 00:24:35.079 }, 00:24:35.079 "claimed": false, 00:24:35.079 "zoned": false, 00:24:35.079 "supported_io_types": { 00:24:35.079 "read": true, 00:24:35.079 "write": true, 00:24:35.079 "unmap": true, 00:24:35.079 "write_zeroes": true, 00:24:35.079 "flush": true, 00:24:35.079 "reset": true, 00:24:35.079 "compare": false, 00:24:35.079 "compare_and_write": false, 00:24:35.079 "abort": false, 00:24:35.079 "nvme_admin": false, 00:24:35.079 "nvme_io": false 00:24:35.079 }, 00:24:35.079 "memory_domains": [ 00:24:35.079 { 00:24:35.079 "dma_device_id": "system", 00:24:35.079 "dma_device_type": 1 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.079 "dma_device_type": 2 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "system", 00:24:35.079 "dma_device_type": 1 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.079 "dma_device_type": 2 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "system", 00:24:35.079 "dma_device_type": 1 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.079 "dma_device_type": 2 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "system", 00:24:35.079 "dma_device_type": 1 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.079 "dma_device_type": 2 00:24:35.079 } 00:24:35.079 ], 00:24:35.079 "driver_specific": { 00:24:35.079 "raid": { 00:24:35.079 "uuid": "20cfedd9-1357-11ef-8e8f-9dd684e56d79", 00:24:35.079 "strip_size_kb": 64, 00:24:35.079 "state": "online", 00:24:35.079 "raid_level": "raid0", 00:24:35.079 "superblock": true, 00:24:35.079 "num_base_bdevs": 4, 00:24:35.079 "num_base_bdevs_discovered": 4, 00:24:35.079 "num_base_bdevs_operational": 4, 00:24:35.079 "base_bdevs_list": [ 00:24:35.079 { 00:24:35.079 "name": "pt1", 00:24:35.079 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:35.079 "is_configured": true, 00:24:35.079 "data_offset": 2048, 00:24:35.079 "data_size": 63488 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "name": "pt2", 00:24:35.079 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:35.079 "is_configured": true, 00:24:35.079 "data_offset": 2048, 00:24:35.079 "data_size": 63488 00:24:35.079 }, 00:24:35.079 { 
00:24:35.079 "name": "pt3", 00:24:35.079 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:35.079 "is_configured": true, 00:24:35.079 "data_offset": 2048, 00:24:35.079 "data_size": 63488 00:24:35.079 }, 00:24:35.079 { 00:24:35.079 "name": "pt4", 00:24:35.079 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:35.079 "is_configured": true, 00:24:35.079 "data_offset": 2048, 00:24:35.079 "data_size": 63488 00:24:35.079 } 00:24:35.079 ] 00:24:35.079 } 00:24:35.079 } 00:24:35.079 }' 00:24:35.079 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.079 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:24:35.079 pt2 00:24:35.079 pt3 00:24:35.079 pt4' 00:24:35.079 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:35.079 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:35.079 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:35.339 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:35.339 "name": "pt1", 00:24:35.339 "aliases": [ 00:24:35.339 "2e55f847-1605-8254-91cc-d6af8ee2f8c8" 00:24:35.339 ], 00:24:35.339 "product_name": "passthru", 00:24:35.339 "block_size": 512, 00:24:35.339 "num_blocks": 65536, 00:24:35.339 "uuid": "2e55f847-1605-8254-91cc-d6af8ee2f8c8", 00:24:35.339 "assigned_rate_limits": { 00:24:35.339 "rw_ios_per_sec": 0, 00:24:35.339 "rw_mbytes_per_sec": 0, 00:24:35.339 "r_mbytes_per_sec": 0, 00:24:35.339 "w_mbytes_per_sec": 0 00:24:35.339 }, 00:24:35.339 "claimed": true, 00:24:35.339 "claim_type": "exclusive_write", 00:24:35.339 "zoned": false, 00:24:35.339 "supported_io_types": { 00:24:35.339 "read": true, 00:24:35.339 "write": true, 00:24:35.339 "unmap": true, 00:24:35.339 "write_zeroes": true, 00:24:35.339 "flush": true, 00:24:35.339 "reset": true, 00:24:35.339 "compare": false, 00:24:35.339 "compare_and_write": false, 00:24:35.339 "abort": true, 00:24:35.339 "nvme_admin": false, 00:24:35.339 "nvme_io": false 00:24:35.339 }, 00:24:35.339 "memory_domains": [ 00:24:35.339 { 00:24:35.339 "dma_device_id": "system", 00:24:35.339 "dma_device_type": 1 00:24:35.339 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.340 "dma_device_type": 2 00:24:35.340 } 00:24:35.340 ], 00:24:35.340 "driver_specific": { 00:24:35.340 "passthru": { 00:24:35.340 "name": "pt1", 00:24:35.340 "base_bdev_name": "malloc1" 00:24:35.340 } 00:24:35.340 } 00:24:35.340 }' 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.340 07:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:35.340 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:35.598 "name": "pt2", 00:24:35.598 "aliases": [ 00:24:35.598 "651f016e-cc67-cc59-aad6-a0c16b5a381b" 00:24:35.598 ], 00:24:35.598 "product_name": "passthru", 00:24:35.598 "block_size": 512, 00:24:35.598 "num_blocks": 65536, 00:24:35.598 "uuid": "651f016e-cc67-cc59-aad6-a0c16b5a381b", 00:24:35.598 "assigned_rate_limits": { 00:24:35.598 "rw_ios_per_sec": 0, 00:24:35.598 "rw_mbytes_per_sec": 0, 00:24:35.598 "r_mbytes_per_sec": 0, 00:24:35.598 "w_mbytes_per_sec": 0 00:24:35.598 }, 00:24:35.598 "claimed": true, 00:24:35.598 "claim_type": "exclusive_write", 00:24:35.598 "zoned": false, 00:24:35.598 "supported_io_types": { 00:24:35.598 "read": true, 00:24:35.598 "write": true, 00:24:35.598 "unmap": true, 00:24:35.598 "write_zeroes": true, 00:24:35.598 "flush": true, 00:24:35.598 "reset": true, 00:24:35.598 "compare": false, 00:24:35.598 "compare_and_write": false, 00:24:35.598 "abort": true, 00:24:35.598 "nvme_admin": false, 00:24:35.598 "nvme_io": false 00:24:35.598 }, 00:24:35.598 "memory_domains": [ 00:24:35.598 { 00:24:35.598 "dma_device_id": "system", 00:24:35.598 "dma_device_type": 1 00:24:35.598 }, 00:24:35.598 { 00:24:35.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.598 "dma_device_type": 2 00:24:35.598 } 00:24:35.598 ], 00:24:35.598 "driver_specific": { 00:24:35.598 "passthru": { 00:24:35.598 "name": "pt2", 00:24:35.598 "base_bdev_name": "malloc2" 00:24:35.598 } 00:24:35.598 } 00:24:35.598 }' 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.598 07:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:35.598 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:35.857 "name": "pt3", 00:24:35.857 "aliases": [ 00:24:35.857 "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8" 00:24:35.857 ], 00:24:35.857 "product_name": "passthru", 00:24:35.857 "block_size": 512, 00:24:35.857 "num_blocks": 65536, 00:24:35.857 "uuid": "c2b0b75c-6eb3-7a59-8431-5a1911aea8c8", 00:24:35.857 "assigned_rate_limits": { 00:24:35.857 "rw_ios_per_sec": 0, 00:24:35.857 "rw_mbytes_per_sec": 0, 00:24:35.857 "r_mbytes_per_sec": 0, 00:24:35.857 "w_mbytes_per_sec": 0 00:24:35.857 }, 00:24:35.857 "claimed": true, 00:24:35.857 "claim_type": "exclusive_write", 00:24:35.857 "zoned": false, 00:24:35.857 "supported_io_types": { 00:24:35.857 "read": true, 00:24:35.857 "write": true, 00:24:35.857 "unmap": true, 00:24:35.857 "write_zeroes": true, 00:24:35.857 "flush": true, 00:24:35.857 "reset": true, 00:24:35.857 "compare": false, 00:24:35.857 "compare_and_write": false, 00:24:35.857 "abort": true, 00:24:35.857 "nvme_admin": false, 00:24:35.857 "nvme_io": false 00:24:35.857 }, 00:24:35.857 "memory_domains": [ 00:24:35.857 { 00:24:35.857 "dma_device_id": "system", 00:24:35.857 "dma_device_type": 1 00:24:35.857 }, 00:24:35.857 { 00:24:35.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.857 "dma_device_type": 2 00:24:35.857 } 00:24:35.857 ], 00:24:35.857 "driver_specific": { 00:24:35.857 "passthru": { 00:24:35.857 "name": "pt3", 00:24:35.857 "base_bdev_name": "malloc3" 00:24:35.857 } 00:24:35.857 } 00:24:35.857 }' 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:35.857 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:36.122 "name": "pt4", 00:24:36.122 
"aliases": [ 00:24:36.122 "6d5e6951-9089-2c54-9fcf-a0d9100949e3" 00:24:36.122 ], 00:24:36.122 "product_name": "passthru", 00:24:36.122 "block_size": 512, 00:24:36.122 "num_blocks": 65536, 00:24:36.122 "uuid": "6d5e6951-9089-2c54-9fcf-a0d9100949e3", 00:24:36.122 "assigned_rate_limits": { 00:24:36.122 "rw_ios_per_sec": 0, 00:24:36.122 "rw_mbytes_per_sec": 0, 00:24:36.122 "r_mbytes_per_sec": 0, 00:24:36.122 "w_mbytes_per_sec": 0 00:24:36.122 }, 00:24:36.122 "claimed": true, 00:24:36.122 "claim_type": "exclusive_write", 00:24:36.122 "zoned": false, 00:24:36.122 "supported_io_types": { 00:24:36.122 "read": true, 00:24:36.122 "write": true, 00:24:36.122 "unmap": true, 00:24:36.122 "write_zeroes": true, 00:24:36.122 "flush": true, 00:24:36.122 "reset": true, 00:24:36.122 "compare": false, 00:24:36.122 "compare_and_write": false, 00:24:36.122 "abort": true, 00:24:36.122 "nvme_admin": false, 00:24:36.122 "nvme_io": false 00:24:36.122 }, 00:24:36.122 "memory_domains": [ 00:24:36.122 { 00:24:36.122 "dma_device_id": "system", 00:24:36.122 "dma_device_type": 1 00:24:36.122 }, 00:24:36.122 { 00:24:36.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.122 "dma_device_type": 2 00:24:36.122 } 00:24:36.122 ], 00:24:36.122 "driver_specific": { 00:24:36.122 "passthru": { 00:24:36.122 "name": "pt4", 00:24:36.122 "base_bdev_name": "malloc4" 00:24:36.122 } 00:24:36.122 } 00:24:36.122 }' 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:36.122 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:36.381 [2024-05-16 07:37:29.847797] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 20cfedd9-1357-11ef-8e8f-9dd684e56d79 '!=' 20cfedd9-1357-11ef-8e8f-9dd684e56d79 ']' 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 59275 
00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 59275 ']' 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 59275 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 59275 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:24:36.381 killing process with pid 59275 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59275' 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 59275 00:24:36.381 [2024-05-16 07:37:29.878073] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:36.381 [2024-05-16 07:37:29.878088] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.381 [2024-05-16 07:37:29.878111] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.381 [2024-05-16 07:37:29.878115] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a2eac80 name raid_bdev1, state offline 00:24:36.381 07:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 59275 00:24:36.381 [2024-05-16 07:37:29.897124] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:36.640 07:37:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:36.640 00:24:36.640 real 0m12.685s 00:24:36.640 user 0m22.645s 00:24:36.640 sys 0m1.953s 00:24:36.640 07:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:36.640 ************************************ 00:24:36.640 END TEST raid_superblock_test 00:24:36.640 07:37:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.640 ************************************ 00:24:36.640 07:37:30 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:24:36.640 07:37:30 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:24:36.640 07:37:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:36.640 07:37:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:36.640 07:37:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:36.640 ************************************ 00:24:36.640 START TEST raid_state_function_test 00:24:36.640 ************************************ 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:24:36.640 07:37:30 
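[editor's note] killprocess (common/autotest_common.sh@946-970) sanity-checks before it kills: it confirms the pid is alive, resolves the process name (on FreeBSD with ps -c -o command <pid> | tail -1, as traced above), refuses to kill a bare sudo, then sends the signal and waits for the pid. A simplified sketch of that shape; the Linux branch shown is an assumption based only on the uname check visible in the trace.

  # Sketch only: kill a test daemon by pid after sanity-checking its name.
  killprocess_sketch() {
      local pid=$1 name
      kill -0 "$pid" || return 1                     # still alive?
      if [ "$(uname)" = Linux ]; then
          name=$(cat /proc/$pid/comm)                # assumption: Linux branch reads /proc
      else
          name=$(ps -c -o command "$pid" | tail -1)  # FreeBSD: command name only
      fi
      [ "$name" = sudo ] && return 1                 # never kill a bare sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }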
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:24:36.640 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=59670 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 59670' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:36.641 Process raid pid: 59670 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 59670 /var/tmp/spdk-raid.sock 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 59670 ']' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:36.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:36.641 07:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.641 [2024-05-16 07:37:30.125088] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:24:36.641 [2024-05-16 07:37:30.125253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:24:37.208 EAL: TSC is not safe to use in SMP mode 00:24:37.208 EAL: TSC is not invariant 00:24:37.208 [2024-05-16 07:37:30.565872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.208 [2024-05-16 07:37:30.647471] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:24:37.208 [2024-05-16 07:37:30.649600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.209 [2024-05-16 07:37:30.650306] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.209 [2024-05-16 07:37:30.650319] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.849 07:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:37.849 07:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:24:37.849 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:38.107 [2024-05-16 07:37:31.492770] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:38.107 [2024-05-16 07:37:31.492817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:38.107 [2024-05-16 07:37:31.492822] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:38.107 [2024-05-16 07:37:31.492829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:38.107 [2024-05-16 07:37:31.492832] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:38.107 [2024-05-16 07:37:31.492838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:38.107 [2024-05-16 07:37:31.492841] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:38.107 [2024-05-16 07:37:31.492847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:38.107 
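[editor's note] raid_state_function_test starts its own bare bdev service (test/app/bdev_svc/bdev_svc) on the private socket /var/tmp/spdk-raid.sock with -L bdev_raid debug logging, records raid_pid, and waitforlisten blocks until the RPC socket answers. A sketch of that startup, where $rootdir stands for the spdk repo root seen in the full paths above and the polling loop is an illustrative stand-in for waitforlisten's internals.

  # Sketch only: run a bare bdev service on a private RPC socket and wait for it.
  sock=/var/tmp/spdk-raid.sock
  "$rootdir"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!

  # Poll until rpc.py can talk to the socket (waitforlisten retries similarly).
  for _ in $(seq 1 100); do
      "$rootdir"/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done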
07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.107 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.366 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.366 "name": "Existed_Raid", 00:24:38.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.366 "strip_size_kb": 64, 00:24:38.366 "state": "configuring", 00:24:38.366 "raid_level": "concat", 00:24:38.366 "superblock": false, 00:24:38.366 "num_base_bdevs": 4, 00:24:38.366 "num_base_bdevs_discovered": 0, 00:24:38.366 "num_base_bdevs_operational": 4, 00:24:38.366 "base_bdevs_list": [ 00:24:38.366 { 00:24:38.366 "name": "BaseBdev1", 00:24:38.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.366 "is_configured": false, 00:24:38.366 "data_offset": 0, 00:24:38.366 "data_size": 0 00:24:38.366 }, 00:24:38.366 { 00:24:38.366 "name": "BaseBdev2", 00:24:38.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.366 "is_configured": false, 00:24:38.366 "data_offset": 0, 00:24:38.366 "data_size": 0 00:24:38.366 }, 00:24:38.366 { 00:24:38.366 "name": "BaseBdev3", 00:24:38.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.366 "is_configured": false, 00:24:38.366 "data_offset": 0, 00:24:38.366 "data_size": 0 00:24:38.366 }, 00:24:38.366 { 00:24:38.366 "name": "BaseBdev4", 00:24:38.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.366 "is_configured": false, 00:24:38.366 "data_offset": 0, 00:24:38.366 "data_size": 0 00:24:38.366 } 00:24:38.366 ] 00:24:38.366 }' 00:24:38.366 07:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.366 07:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.625 07:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:38.883 [2024-05-16 07:37:32.360822] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:38.883 [2024-05-16 07:37:32.360846] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a83f500 name Existed_Raid, state configuring 00:24:38.883 07:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:39.142 [2024-05-16 07:37:32.564844] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
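[editor's note] The NOTICE lines above show bdev_raid_create being called while none of BaseBdev1..BaseBdev4 exist yet: the raid is registered in the "configuring" state with zero discovered base bdevs, and each later bdev_malloc_create is claimed as it appears until all four are present and the volume flips to "online". A simplified sketch of that flow (the real test also deletes and re-creates Existed_Raid between steps); names and sizes match the trace.

  # Sketch only: create-before-base-bdevs flow exercised by raid_state_function_test.
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # state is now "configuring", num_base_bdevs_discovered == 0

  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "$b"            # 32 MiB, 512-byte blocks
      $rpc bdev_get_bdevs -b "$b" -t 2000 > /dev/null   # wait for it to register
  done
  # once all four are claimed the raid transitions to "online"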
00:24:39.142 [2024-05-16 07:37:32.564881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:39.142 [2024-05-16 07:37:32.564884] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:39.142 [2024-05-16 07:37:32.564891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:39.142 [2024-05-16 07:37:32.564893] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:39.142 [2024-05-16 07:37:32.564899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:39.142 [2024-05-16 07:37:32.564902] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:39.142 [2024-05-16 07:37:32.564924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:39.142 07:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:39.402 [2024-05-16 07:37:32.825830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.402 BaseBdev1 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:39.402 07:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:39.659 07:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:39.918 [ 00:24:39.918 { 00:24:39.918 "name": "BaseBdev1", 00:24:39.918 "aliases": [ 00:24:39.918 "282323b5-1357-11ef-8e8f-9dd684e56d79" 00:24:39.918 ], 00:24:39.918 "product_name": "Malloc disk", 00:24:39.918 "block_size": 512, 00:24:39.918 "num_blocks": 65536, 00:24:39.918 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:39.918 "assigned_rate_limits": { 00:24:39.918 "rw_ios_per_sec": 0, 00:24:39.918 "rw_mbytes_per_sec": 0, 00:24:39.918 "r_mbytes_per_sec": 0, 00:24:39.918 "w_mbytes_per_sec": 0 00:24:39.918 }, 00:24:39.918 "claimed": true, 00:24:39.918 "claim_type": "exclusive_write", 00:24:39.918 "zoned": false, 00:24:39.918 "supported_io_types": { 00:24:39.918 "read": true, 00:24:39.918 "write": true, 00:24:39.918 "unmap": true, 00:24:39.918 "write_zeroes": true, 00:24:39.918 "flush": true, 00:24:39.918 "reset": true, 00:24:39.918 "compare": false, 00:24:39.918 "compare_and_write": false, 00:24:39.918 "abort": true, 00:24:39.918 "nvme_admin": false, 00:24:39.918 "nvme_io": false 00:24:39.918 }, 00:24:39.918 "memory_domains": [ 00:24:39.918 { 00:24:39.918 "dma_device_id": "system", 00:24:39.918 "dma_device_type": 1 00:24:39.918 }, 00:24:39.918 { 00:24:39.918 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:39.918 "dma_device_type": 2 00:24:39.918 } 00:24:39.918 ], 00:24:39.918 "driver_specific": {} 00:24:39.918 } 00:24:39.918 ] 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.918 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.919 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.177 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:40.177 "name": "Existed_Raid", 00:24:40.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.177 "strip_size_kb": 64, 00:24:40.177 "state": "configuring", 00:24:40.177 "raid_level": "concat", 00:24:40.177 "superblock": false, 00:24:40.177 "num_base_bdevs": 4, 00:24:40.177 "num_base_bdevs_discovered": 1, 00:24:40.177 "num_base_bdevs_operational": 4, 00:24:40.177 "base_bdevs_list": [ 00:24:40.177 { 00:24:40.177 "name": "BaseBdev1", 00:24:40.177 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:40.177 "is_configured": true, 00:24:40.177 "data_offset": 0, 00:24:40.177 "data_size": 65536 00:24:40.177 }, 00:24:40.177 { 00:24:40.177 "name": "BaseBdev2", 00:24:40.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.177 "is_configured": false, 00:24:40.177 "data_offset": 0, 00:24:40.177 "data_size": 0 00:24:40.177 }, 00:24:40.177 { 00:24:40.177 "name": "BaseBdev3", 00:24:40.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.177 "is_configured": false, 00:24:40.177 "data_offset": 0, 00:24:40.177 "data_size": 0 00:24:40.177 }, 00:24:40.177 { 00:24:40.177 "name": "BaseBdev4", 00:24:40.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.177 "is_configured": false, 00:24:40.177 "data_offset": 0, 00:24:40.177 "data_size": 0 00:24:40.177 } 00:24:40.177 ] 00:24:40.177 }' 00:24:40.177 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:40.177 07:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.436 07:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:24:40.695 [2024-05-16 07:37:34.000929] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:40.695 [2024-05-16 07:37:34.000955] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a83f500 name Existed_Raid, state configuring 00:24:40.695 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:40.953 [2024-05-16 07:37:34.284970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:40.953 [2024-05-16 07:37:34.285645] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:40.953 [2024-05-16 07:37:34.285685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:40.953 [2024-05-16 07:37:34.285690] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:40.953 [2024-05-16 07:37:34.285698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:40.953 [2024-05-16 07:37:34.285712] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:40.953 [2024-05-16 07:37:34.285719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.953 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.211 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.211 "name": "Existed_Raid", 00:24:41.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.211 "strip_size_kb": 64, 00:24:41.211 "state": "configuring", 00:24:41.211 "raid_level": "concat", 00:24:41.211 "superblock": false, 00:24:41.211 "num_base_bdevs": 4, 00:24:41.211 
"num_base_bdevs_discovered": 1, 00:24:41.211 "num_base_bdevs_operational": 4, 00:24:41.211 "base_bdevs_list": [ 00:24:41.211 { 00:24:41.211 "name": "BaseBdev1", 00:24:41.211 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:41.211 "is_configured": true, 00:24:41.211 "data_offset": 0, 00:24:41.211 "data_size": 65536 00:24:41.211 }, 00:24:41.211 { 00:24:41.211 "name": "BaseBdev2", 00:24:41.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.211 "is_configured": false, 00:24:41.211 "data_offset": 0, 00:24:41.211 "data_size": 0 00:24:41.211 }, 00:24:41.211 { 00:24:41.211 "name": "BaseBdev3", 00:24:41.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.211 "is_configured": false, 00:24:41.211 "data_offset": 0, 00:24:41.211 "data_size": 0 00:24:41.211 }, 00:24:41.211 { 00:24:41.211 "name": "BaseBdev4", 00:24:41.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.211 "is_configured": false, 00:24:41.211 "data_offset": 0, 00:24:41.211 "data_size": 0 00:24:41.211 } 00:24:41.211 ] 00:24:41.211 }' 00:24:41.211 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.211 07:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.470 07:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:41.729 [2024-05-16 07:37:35.057137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:41.729 BaseBdev2 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:41.729 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:42.053 [ 00:24:42.053 { 00:24:42.053 "name": "BaseBdev2", 00:24:42.053 "aliases": [ 00:24:42.053 "2977bd1b-1357-11ef-8e8f-9dd684e56d79" 00:24:42.053 ], 00:24:42.053 "product_name": "Malloc disk", 00:24:42.053 "block_size": 512, 00:24:42.053 "num_blocks": 65536, 00:24:42.053 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:42.053 "assigned_rate_limits": { 00:24:42.053 "rw_ios_per_sec": 0, 00:24:42.053 "rw_mbytes_per_sec": 0, 00:24:42.053 "r_mbytes_per_sec": 0, 00:24:42.053 "w_mbytes_per_sec": 0 00:24:42.053 }, 00:24:42.053 "claimed": true, 00:24:42.053 "claim_type": "exclusive_write", 00:24:42.053 "zoned": false, 00:24:42.053 "supported_io_types": { 00:24:42.053 "read": true, 00:24:42.053 "write": true, 00:24:42.053 "unmap": true, 00:24:42.053 "write_zeroes": true, 00:24:42.053 "flush": true, 00:24:42.053 "reset": true, 00:24:42.053 "compare": false, 00:24:42.053 
"compare_and_write": false, 00:24:42.053 "abort": true, 00:24:42.053 "nvme_admin": false, 00:24:42.053 "nvme_io": false 00:24:42.053 }, 00:24:42.053 "memory_domains": [ 00:24:42.053 { 00:24:42.053 "dma_device_id": "system", 00:24:42.053 "dma_device_type": 1 00:24:42.053 }, 00:24:42.053 { 00:24:42.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.053 "dma_device_type": 2 00:24:42.053 } 00:24:42.053 ], 00:24:42.053 "driver_specific": {} 00:24:42.053 } 00:24:42.053 ] 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.053 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.338 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:42.338 "name": "Existed_Raid", 00:24:42.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.338 "strip_size_kb": 64, 00:24:42.338 "state": "configuring", 00:24:42.338 "raid_level": "concat", 00:24:42.338 "superblock": false, 00:24:42.338 "num_base_bdevs": 4, 00:24:42.338 "num_base_bdevs_discovered": 2, 00:24:42.338 "num_base_bdevs_operational": 4, 00:24:42.338 "base_bdevs_list": [ 00:24:42.338 { 00:24:42.338 "name": "BaseBdev1", 00:24:42.338 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:42.338 "is_configured": true, 00:24:42.338 "data_offset": 0, 00:24:42.338 "data_size": 65536 00:24:42.338 }, 00:24:42.338 { 00:24:42.338 "name": "BaseBdev2", 00:24:42.338 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:42.338 "is_configured": true, 00:24:42.338 "data_offset": 0, 00:24:42.338 "data_size": 65536 00:24:42.338 }, 00:24:42.338 { 00:24:42.338 "name": "BaseBdev3", 00:24:42.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.338 "is_configured": false, 00:24:42.338 "data_offset": 0, 00:24:42.338 "data_size": 0 00:24:42.338 }, 00:24:42.338 { 00:24:42.338 "name": "BaseBdev4", 00:24:42.338 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:42.338 "is_configured": false, 00:24:42.338 "data_offset": 0, 00:24:42.338 "data_size": 0 00:24:42.338 } 00:24:42.338 ] 00:24:42.338 }' 00:24:42.338 07:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:42.338 07:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.597 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:43.164 [2024-05-16 07:37:36.425144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.164 BaseBdev3 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:43.164 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:43.423 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:43.423 [ 00:24:43.423 { 00:24:43.423 "name": "BaseBdev3", 00:24:43.423 "aliases": [ 00:24:43.423 "2a487c66-1357-11ef-8e8f-9dd684e56d79" 00:24:43.423 ], 00:24:43.423 "product_name": "Malloc disk", 00:24:43.423 "block_size": 512, 00:24:43.423 "num_blocks": 65536, 00:24:43.423 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:43.423 "assigned_rate_limits": { 00:24:43.423 "rw_ios_per_sec": 0, 00:24:43.423 "rw_mbytes_per_sec": 0, 00:24:43.423 "r_mbytes_per_sec": 0, 00:24:43.423 "w_mbytes_per_sec": 0 00:24:43.423 }, 00:24:43.423 "claimed": true, 00:24:43.423 "claim_type": "exclusive_write", 00:24:43.423 "zoned": false, 00:24:43.423 "supported_io_types": { 00:24:43.423 "read": true, 00:24:43.423 "write": true, 00:24:43.423 "unmap": true, 00:24:43.423 "write_zeroes": true, 00:24:43.423 "flush": true, 00:24:43.423 "reset": true, 00:24:43.423 "compare": false, 00:24:43.423 "compare_and_write": false, 00:24:43.423 "abort": true, 00:24:43.423 "nvme_admin": false, 00:24:43.423 "nvme_io": false 00:24:43.423 }, 00:24:43.423 "memory_domains": [ 00:24:43.423 { 00:24:43.423 "dma_device_id": "system", 00:24:43.423 "dma_device_type": 1 00:24:43.423 }, 00:24:43.423 { 00:24:43.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.423 "dma_device_type": 2 00:24:43.423 } 00:24:43.423 ], 00:24:43.423 "driver_specific": {} 00:24:43.423 } 00:24:43.423 ] 00:24:43.423 07:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:43.423 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:43.423 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:43.423 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.424 07:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.683 07:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:43.683 "name": "Existed_Raid", 00:24:43.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.683 "strip_size_kb": 64, 00:24:43.683 "state": "configuring", 00:24:43.683 "raid_level": "concat", 00:24:43.683 "superblock": false, 00:24:43.683 "num_base_bdevs": 4, 00:24:43.683 "num_base_bdevs_discovered": 3, 00:24:43.683 "num_base_bdevs_operational": 4, 00:24:43.683 "base_bdevs_list": [ 00:24:43.683 { 00:24:43.683 "name": "BaseBdev1", 00:24:43.683 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:43.683 "is_configured": true, 00:24:43.683 "data_offset": 0, 00:24:43.683 "data_size": 65536 00:24:43.683 }, 00:24:43.683 { 00:24:43.683 "name": "BaseBdev2", 00:24:43.683 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:43.683 "is_configured": true, 00:24:43.683 "data_offset": 0, 00:24:43.683 "data_size": 65536 00:24:43.683 }, 00:24:43.683 { 00:24:43.683 "name": "BaseBdev3", 00:24:43.683 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:43.683 "is_configured": true, 00:24:43.683 "data_offset": 0, 00:24:43.683 "data_size": 65536 00:24:43.683 }, 00:24:43.683 { 00:24:43.683 "name": "BaseBdev4", 00:24:43.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.683 "is_configured": false, 00:24:43.683 "data_offset": 0, 00:24:43.683 "data_size": 0 00:24:43.683 } 00:24:43.683 ] 00:24:43.683 }' 00:24:43.683 07:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:43.683 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.252 07:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:44.511 [2024-05-16 07:37:37.821205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:44.511 [2024-05-16 07:37:37.821232] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a83fa00 00:24:44.511 [2024-05-16 07:37:37.821235] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 262144, blocklen 512 00:24:44.511 [2024-05-16 07:37:37.821261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8a2ec0 00:24:44.511 [2024-05-16 07:37:37.821341] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a83fa00 00:24:44.511 [2024-05-16 07:37:37.821345] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a83fa00 00:24:44.511 [2024-05-16 07:37:37.821369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.511 BaseBdev4 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:44.511 07:37:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.781 07:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:45.052 [ 00:24:45.052 { 00:24:45.052 "name": "BaseBdev4", 00:24:45.052 "aliases": [ 00:24:45.052 "2b1d81dd-1357-11ef-8e8f-9dd684e56d79" 00:24:45.052 ], 00:24:45.052 "product_name": "Malloc disk", 00:24:45.052 "block_size": 512, 00:24:45.052 "num_blocks": 65536, 00:24:45.052 "uuid": "2b1d81dd-1357-11ef-8e8f-9dd684e56d79", 00:24:45.052 "assigned_rate_limits": { 00:24:45.052 "rw_ios_per_sec": 0, 00:24:45.052 "rw_mbytes_per_sec": 0, 00:24:45.052 "r_mbytes_per_sec": 0, 00:24:45.052 "w_mbytes_per_sec": 0 00:24:45.052 }, 00:24:45.052 "claimed": true, 00:24:45.052 "claim_type": "exclusive_write", 00:24:45.052 "zoned": false, 00:24:45.053 "supported_io_types": { 00:24:45.053 "read": true, 00:24:45.053 "write": true, 00:24:45.053 "unmap": true, 00:24:45.053 "write_zeroes": true, 00:24:45.053 "flush": true, 00:24:45.053 "reset": true, 00:24:45.053 "compare": false, 00:24:45.053 "compare_and_write": false, 00:24:45.053 "abort": true, 00:24:45.053 "nvme_admin": false, 00:24:45.053 "nvme_io": false 00:24:45.053 }, 00:24:45.053 "memory_domains": [ 00:24:45.053 { 00:24:45.053 "dma_device_id": "system", 00:24:45.053 "dma_device_type": 1 00:24:45.053 }, 00:24:45.053 { 00:24:45.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.053 "dma_device_type": 2 00:24:45.053 } 00:24:45.053 ], 00:24:45.053 "driver_specific": {} 00:24:45.053 } 00:24:45.053 ] 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.053 "name": "Existed_Raid", 00:24:45.053 "uuid": "2b1d8671-1357-11ef-8e8f-9dd684e56d79", 00:24:45.053 "strip_size_kb": 64, 00:24:45.053 "state": "online", 00:24:45.053 "raid_level": "concat", 00:24:45.053 "superblock": false, 00:24:45.053 "num_base_bdevs": 4, 00:24:45.053 "num_base_bdevs_discovered": 4, 00:24:45.053 "num_base_bdevs_operational": 4, 00:24:45.053 "base_bdevs_list": [ 00:24:45.053 { 00:24:45.053 "name": "BaseBdev1", 00:24:45.053 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:45.053 "is_configured": true, 00:24:45.053 "data_offset": 0, 00:24:45.053 "data_size": 65536 00:24:45.053 }, 00:24:45.053 { 00:24:45.053 "name": "BaseBdev2", 00:24:45.053 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:45.053 "is_configured": true, 00:24:45.053 "data_offset": 0, 00:24:45.053 "data_size": 65536 00:24:45.053 }, 00:24:45.053 { 00:24:45.053 "name": "BaseBdev3", 00:24:45.053 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:45.053 "is_configured": true, 00:24:45.053 "data_offset": 0, 00:24:45.053 "data_size": 65536 00:24:45.053 }, 00:24:45.053 { 00:24:45.053 "name": "BaseBdev4", 00:24:45.053 "uuid": "2b1d81dd-1357-11ef-8e8f-9dd684e56d79", 00:24:45.053 "is_configured": true, 00:24:45.053 "data_offset": 0, 00:24:45.053 "data_size": 65536 00:24:45.053 } 00:24:45.053 ] 00:24:45.053 }' 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.053 07:37:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:24:45.621 
07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:24:45.621 07:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:45.621 [2024-05-16 07:37:39.177268] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.880 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:24:45.880 "name": "Existed_Raid", 00:24:45.880 "aliases": [ 00:24:45.880 "2b1d8671-1357-11ef-8e8f-9dd684e56d79" 00:24:45.880 ], 00:24:45.880 "product_name": "Raid Volume", 00:24:45.880 "block_size": 512, 00:24:45.880 "num_blocks": 262144, 00:24:45.880 "uuid": "2b1d8671-1357-11ef-8e8f-9dd684e56d79", 00:24:45.880 "assigned_rate_limits": { 00:24:45.880 "rw_ios_per_sec": 0, 00:24:45.880 "rw_mbytes_per_sec": 0, 00:24:45.880 "r_mbytes_per_sec": 0, 00:24:45.880 "w_mbytes_per_sec": 0 00:24:45.880 }, 00:24:45.880 "claimed": false, 00:24:45.880 "zoned": false, 00:24:45.880 "supported_io_types": { 00:24:45.880 "read": true, 00:24:45.880 "write": true, 00:24:45.880 "unmap": true, 00:24:45.880 "write_zeroes": true, 00:24:45.880 "flush": true, 00:24:45.880 "reset": true, 00:24:45.880 "compare": false, 00:24:45.880 "compare_and_write": false, 00:24:45.880 "abort": false, 00:24:45.880 "nvme_admin": false, 00:24:45.880 "nvme_io": false 00:24:45.880 }, 00:24:45.880 "memory_domains": [ 00:24:45.880 { 00:24:45.880 "dma_device_id": "system", 00:24:45.880 "dma_device_type": 1 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.880 "dma_device_type": 2 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "system", 00:24:45.880 "dma_device_type": 1 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.880 "dma_device_type": 2 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "system", 00:24:45.880 "dma_device_type": 1 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.880 "dma_device_type": 2 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "system", 00:24:45.880 "dma_device_type": 1 00:24:45.880 }, 00:24:45.880 { 00:24:45.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.880 "dma_device_type": 2 00:24:45.880 } 00:24:45.880 ], 00:24:45.880 "driver_specific": { 00:24:45.880 "raid": { 00:24:45.880 "uuid": "2b1d8671-1357-11ef-8e8f-9dd684e56d79", 00:24:45.880 "strip_size_kb": 64, 00:24:45.880 "state": "online", 00:24:45.880 "raid_level": "concat", 00:24:45.880 "superblock": false, 00:24:45.880 "num_base_bdevs": 4, 00:24:45.880 "num_base_bdevs_discovered": 4, 00:24:45.881 "num_base_bdevs_operational": 4, 00:24:45.881 "base_bdevs_list": [ 00:24:45.881 { 00:24:45.881 "name": "BaseBdev1", 00:24:45.881 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:45.881 "is_configured": true, 00:24:45.881 "data_offset": 0, 00:24:45.881 "data_size": 65536 00:24:45.881 }, 00:24:45.881 { 00:24:45.881 "name": "BaseBdev2", 00:24:45.881 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:45.881 "is_configured": true, 00:24:45.881 "data_offset": 0, 00:24:45.881 "data_size": 65536 00:24:45.881 }, 00:24:45.881 { 00:24:45.881 "name": "BaseBdev3", 00:24:45.881 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:45.881 "is_configured": true, 00:24:45.881 "data_offset": 0, 00:24:45.881 "data_size": 65536 00:24:45.881 }, 00:24:45.881 { 00:24:45.881 "name": "BaseBdev4", 00:24:45.881 
"uuid": "2b1d81dd-1357-11ef-8e8f-9dd684e56d79", 00:24:45.881 "is_configured": true, 00:24:45.881 "data_offset": 0, 00:24:45.881 "data_size": 65536 00:24:45.881 } 00:24:45.881 ] 00:24:45.881 } 00:24:45.881 } 00:24:45.881 }' 00:24:45.881 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:45.881 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:24:45.881 BaseBdev2 00:24:45.881 BaseBdev3 00:24:45.881 BaseBdev4' 00:24:45.881 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:45.881 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:45.881 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:46.140 "name": "BaseBdev1", 00:24:46.140 "aliases": [ 00:24:46.140 "282323b5-1357-11ef-8e8f-9dd684e56d79" 00:24:46.140 ], 00:24:46.140 "product_name": "Malloc disk", 00:24:46.140 "block_size": 512, 00:24:46.140 "num_blocks": 65536, 00:24:46.140 "uuid": "282323b5-1357-11ef-8e8f-9dd684e56d79", 00:24:46.140 "assigned_rate_limits": { 00:24:46.140 "rw_ios_per_sec": 0, 00:24:46.140 "rw_mbytes_per_sec": 0, 00:24:46.140 "r_mbytes_per_sec": 0, 00:24:46.140 "w_mbytes_per_sec": 0 00:24:46.140 }, 00:24:46.140 "claimed": true, 00:24:46.140 "claim_type": "exclusive_write", 00:24:46.140 "zoned": false, 00:24:46.140 "supported_io_types": { 00:24:46.140 "read": true, 00:24:46.140 "write": true, 00:24:46.140 "unmap": true, 00:24:46.140 "write_zeroes": true, 00:24:46.140 "flush": true, 00:24:46.140 "reset": true, 00:24:46.140 "compare": false, 00:24:46.140 "compare_and_write": false, 00:24:46.140 "abort": true, 00:24:46.140 "nvme_admin": false, 00:24:46.140 "nvme_io": false 00:24:46.140 }, 00:24:46.140 "memory_domains": [ 00:24:46.140 { 00:24:46.140 "dma_device_id": "system", 00:24:46.140 "dma_device_type": 1 00:24:46.140 }, 00:24:46.140 { 00:24:46.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.140 "dma_device_type": 2 00:24:46.140 } 00:24:46.140 ], 00:24:46.140 "driver_specific": {} 00:24:46.140 }' 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.140 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.141 07:37:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:46.141 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:46.399 "name": "BaseBdev2", 00:24:46.399 "aliases": [ 00:24:46.399 "2977bd1b-1357-11ef-8e8f-9dd684e56d79" 00:24:46.399 ], 00:24:46.399 "product_name": "Malloc disk", 00:24:46.399 "block_size": 512, 00:24:46.399 "num_blocks": 65536, 00:24:46.399 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:46.399 "assigned_rate_limits": { 00:24:46.399 "rw_ios_per_sec": 0, 00:24:46.399 "rw_mbytes_per_sec": 0, 00:24:46.399 "r_mbytes_per_sec": 0, 00:24:46.399 "w_mbytes_per_sec": 0 00:24:46.399 }, 00:24:46.399 "claimed": true, 00:24:46.399 "claim_type": "exclusive_write", 00:24:46.399 "zoned": false, 00:24:46.399 "supported_io_types": { 00:24:46.399 "read": true, 00:24:46.399 "write": true, 00:24:46.399 "unmap": true, 00:24:46.399 "write_zeroes": true, 00:24:46.399 "flush": true, 00:24:46.399 "reset": true, 00:24:46.399 "compare": false, 00:24:46.399 "compare_and_write": false, 00:24:46.399 "abort": true, 00:24:46.399 "nvme_admin": false, 00:24:46.399 "nvme_io": false 00:24:46.399 }, 00:24:46.399 "memory_domains": [ 00:24:46.399 { 00:24:46.399 "dma_device_id": "system", 00:24:46.399 "dma_device_type": 1 00:24:46.399 }, 00:24:46.399 { 00:24:46.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.399 "dma_device_type": 2 00:24:46.399 } 00:24:46.399 ], 00:24:46.399 "driver_specific": {} 00:24:46.399 }' 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:46.399 07:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 
00:24:46.658 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:46.658 "name": "BaseBdev3", 00:24:46.658 "aliases": [ 00:24:46.658 "2a487c66-1357-11ef-8e8f-9dd684e56d79" 00:24:46.658 ], 00:24:46.658 "product_name": "Malloc disk", 00:24:46.658 "block_size": 512, 00:24:46.658 "num_blocks": 65536, 00:24:46.658 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:46.658 "assigned_rate_limits": { 00:24:46.658 "rw_ios_per_sec": 0, 00:24:46.658 "rw_mbytes_per_sec": 0, 00:24:46.658 "r_mbytes_per_sec": 0, 00:24:46.658 "w_mbytes_per_sec": 0 00:24:46.658 }, 00:24:46.658 "claimed": true, 00:24:46.658 "claim_type": "exclusive_write", 00:24:46.658 "zoned": false, 00:24:46.658 "supported_io_types": { 00:24:46.658 "read": true, 00:24:46.658 "write": true, 00:24:46.658 "unmap": true, 00:24:46.658 "write_zeroes": true, 00:24:46.658 "flush": true, 00:24:46.658 "reset": true, 00:24:46.658 "compare": false, 00:24:46.658 "compare_and_write": false, 00:24:46.658 "abort": true, 00:24:46.658 "nvme_admin": false, 00:24:46.658 "nvme_io": false 00:24:46.658 }, 00:24:46.658 "memory_domains": [ 00:24:46.658 { 00:24:46.658 "dma_device_id": "system", 00:24:46.658 "dma_device_type": 1 00:24:46.658 }, 00:24:46.658 { 00:24:46.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.658 "dma_device_type": 2 00:24:46.658 } 00:24:46.658 ], 00:24:46.658 "driver_specific": {} 00:24:46.658 }' 00:24:46.658 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.658 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:46.658 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:46.916 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:24:47.175 "name": "BaseBdev4", 00:24:47.175 "aliases": [ 00:24:47.175 "2b1d81dd-1357-11ef-8e8f-9dd684e56d79" 00:24:47.175 ], 00:24:47.175 "product_name": "Malloc disk", 00:24:47.175 "block_size": 512, 00:24:47.175 "num_blocks": 65536, 00:24:47.175 "uuid": "2b1d81dd-1357-11ef-8e8f-9dd684e56d79", 00:24:47.175 "assigned_rate_limits": { 00:24:47.175 "rw_ios_per_sec": 0, 00:24:47.175 
"rw_mbytes_per_sec": 0, 00:24:47.175 "r_mbytes_per_sec": 0, 00:24:47.175 "w_mbytes_per_sec": 0 00:24:47.175 }, 00:24:47.175 "claimed": true, 00:24:47.175 "claim_type": "exclusive_write", 00:24:47.175 "zoned": false, 00:24:47.175 "supported_io_types": { 00:24:47.175 "read": true, 00:24:47.175 "write": true, 00:24:47.175 "unmap": true, 00:24:47.175 "write_zeroes": true, 00:24:47.175 "flush": true, 00:24:47.175 "reset": true, 00:24:47.175 "compare": false, 00:24:47.175 "compare_and_write": false, 00:24:47.175 "abort": true, 00:24:47.175 "nvme_admin": false, 00:24:47.175 "nvme_io": false 00:24:47.175 }, 00:24:47.175 "memory_domains": [ 00:24:47.175 { 00:24:47.175 "dma_device_id": "system", 00:24:47.175 "dma_device_type": 1 00:24:47.175 }, 00:24:47.175 { 00:24:47.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.175 "dma_device_type": 2 00:24:47.175 } 00:24:47.175 ], 00:24:47.175 "driver_specific": {} 00:24:47.175 }' 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:47.175 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:24:47.176 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:47.176 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.176 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:24:47.176 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:24:47.176 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:47.434 [2024-05-16 07:37:40.865309] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:47.434 [2024-05-16 07:37:40.865338] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:47.434 [2024-05-16 07:37:40.865352] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # return 1 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.434 07:37:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.692 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.693 "name": "Existed_Raid", 00:24:47.693 "uuid": "2b1d8671-1357-11ef-8e8f-9dd684e56d79", 00:24:47.693 "strip_size_kb": 64, 00:24:47.693 "state": "offline", 00:24:47.693 "raid_level": "concat", 00:24:47.693 "superblock": false, 00:24:47.693 "num_base_bdevs": 4, 00:24:47.693 "num_base_bdevs_discovered": 3, 00:24:47.693 "num_base_bdevs_operational": 3, 00:24:47.693 "base_bdevs_list": [ 00:24:47.693 { 00:24:47.693 "name": null, 00:24:47.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.693 "is_configured": false, 00:24:47.693 "data_offset": 0, 00:24:47.693 "data_size": 65536 00:24:47.693 }, 00:24:47.693 { 00:24:47.693 "name": "BaseBdev2", 00:24:47.693 "uuid": "2977bd1b-1357-11ef-8e8f-9dd684e56d79", 00:24:47.693 "is_configured": true, 00:24:47.693 "data_offset": 0, 00:24:47.693 "data_size": 65536 00:24:47.693 }, 00:24:47.693 { 00:24:47.693 "name": "BaseBdev3", 00:24:47.693 "uuid": "2a487c66-1357-11ef-8e8f-9dd684e56d79", 00:24:47.693 "is_configured": true, 00:24:47.693 "data_offset": 0, 00:24:47.693 "data_size": 65536 00:24:47.693 }, 00:24:47.693 { 00:24:47.693 "name": "BaseBdev4", 00:24:47.693 "uuid": "2b1d81dd-1357-11ef-8e8f-9dd684e56d79", 00:24:47.693 "is_configured": true, 00:24:47.693 "data_offset": 0, 00:24:47.693 "data_size": 65536 00:24:47.693 } 00:24:47.693 ] 00:24:47.693 }' 00:24:47.693 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.693 07:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.259 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:48.259 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:48.259 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.259 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:48.517 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:48.517 07:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:48.517 07:37:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:48.517 [2024-05-16 07:37:42.050268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:48.517 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:48.517 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:48.517 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.517 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:49.084 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:49.084 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.084 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:49.343 [2024-05-16 07:37:42.695143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:49.343 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:49.343 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:49.343 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:24:49.343 07:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.601 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:24:49.601 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.601 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:49.858 [2024-05-16 07:37:43.272016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:49.858 [2024-05-16 07:37:43.272047] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a83fa00 name Existed_Raid, state offline 00:24:49.858 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:49.858 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:49.858 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.858 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:24:50.115 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:24:50.115 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:24:50.115 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:24:50.115 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:24:50.115 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:50.115 07:37:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:50.374 BaseBdev2 00:24:50.374 07:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:50.375 07:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:50.633 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:50.891 [ 00:24:50.891 { 00:24:50.891 "name": "BaseBdev2", 00:24:50.891 "aliases": [ 00:24:50.891 "2eb7bb45-1357-11ef-8e8f-9dd684e56d79" 00:24:50.891 ], 00:24:50.891 "product_name": "Malloc disk", 00:24:50.891 "block_size": 512, 00:24:50.891 "num_blocks": 65536, 00:24:50.891 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:50.891 "assigned_rate_limits": { 00:24:50.891 "rw_ios_per_sec": 0, 00:24:50.891 "rw_mbytes_per_sec": 0, 00:24:50.891 "r_mbytes_per_sec": 0, 00:24:50.891 "w_mbytes_per_sec": 0 00:24:50.891 }, 00:24:50.891 "claimed": false, 00:24:50.891 "zoned": false, 00:24:50.891 "supported_io_types": { 00:24:50.891 "read": true, 00:24:50.891 "write": true, 00:24:50.891 "unmap": true, 00:24:50.891 "write_zeroes": true, 00:24:50.891 "flush": true, 00:24:50.891 "reset": true, 00:24:50.891 "compare": false, 00:24:50.891 "compare_and_write": false, 00:24:50.891 "abort": true, 00:24:50.891 "nvme_admin": false, 00:24:50.891 "nvme_io": false 00:24:50.891 }, 00:24:50.891 "memory_domains": [ 00:24:50.891 { 00:24:50.891 "dma_device_id": "system", 00:24:50.891 "dma_device_type": 1 00:24:50.891 }, 00:24:50.891 { 00:24:50.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.891 "dma_device_type": 2 00:24:50.891 } 00:24:50.891 ], 00:24:50.891 "driver_specific": {} 00:24:50.891 } 00:24:50.891 ] 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:51.150 BaseBdev3 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 
-- # local i 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:51.150 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:51.408 07:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:51.667 [ 00:24:51.667 { 00:24:51.667 "name": "BaseBdev3", 00:24:51.667 "aliases": [ 00:24:51.667 "2f2b1694-1357-11ef-8e8f-9dd684e56d79" 00:24:51.667 ], 00:24:51.667 "product_name": "Malloc disk", 00:24:51.667 "block_size": 512, 00:24:51.667 "num_blocks": 65536, 00:24:51.667 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:51.667 "assigned_rate_limits": { 00:24:51.667 "rw_ios_per_sec": 0, 00:24:51.667 "rw_mbytes_per_sec": 0, 00:24:51.667 "r_mbytes_per_sec": 0, 00:24:51.667 "w_mbytes_per_sec": 0 00:24:51.667 }, 00:24:51.667 "claimed": false, 00:24:51.667 "zoned": false, 00:24:51.667 "supported_io_types": { 00:24:51.667 "read": true, 00:24:51.667 "write": true, 00:24:51.667 "unmap": true, 00:24:51.667 "write_zeroes": true, 00:24:51.667 "flush": true, 00:24:51.667 "reset": true, 00:24:51.667 "compare": false, 00:24:51.667 "compare_and_write": false, 00:24:51.667 "abort": true, 00:24:51.667 "nvme_admin": false, 00:24:51.667 "nvme_io": false 00:24:51.667 }, 00:24:51.667 "memory_domains": [ 00:24:51.667 { 00:24:51.667 "dma_device_id": "system", 00:24:51.667 "dma_device_type": 1 00:24:51.667 }, 00:24:51.667 { 00:24:51.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.667 "dma_device_type": 2 00:24:51.667 } 00:24:51.667 ], 00:24:51.667 "driver_specific": {} 00:24:51.667 } 00:24:51.667 ] 00:24:51.667 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:51.667 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:51.667 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:51.667 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:51.946 BaseBdev4 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:51.946 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:52.205 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
00:24:52.463 [ 00:24:52.463 { 00:24:52.463 "name": "BaseBdev4", 00:24:52.463 "aliases": [ 00:24:52.463 "2f91a232-1357-11ef-8e8f-9dd684e56d79" 00:24:52.463 ], 00:24:52.463 "product_name": "Malloc disk", 00:24:52.463 "block_size": 512, 00:24:52.463 "num_blocks": 65536, 00:24:52.463 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:52.463 "assigned_rate_limits": { 00:24:52.463 "rw_ios_per_sec": 0, 00:24:52.463 "rw_mbytes_per_sec": 0, 00:24:52.463 "r_mbytes_per_sec": 0, 00:24:52.463 "w_mbytes_per_sec": 0 00:24:52.463 }, 00:24:52.463 "claimed": false, 00:24:52.463 "zoned": false, 00:24:52.463 "supported_io_types": { 00:24:52.463 "read": true, 00:24:52.463 "write": true, 00:24:52.463 "unmap": true, 00:24:52.463 "write_zeroes": true, 00:24:52.463 "flush": true, 00:24:52.463 "reset": true, 00:24:52.463 "compare": false, 00:24:52.464 "compare_and_write": false, 00:24:52.464 "abort": true, 00:24:52.464 "nvme_admin": false, 00:24:52.464 "nvme_io": false 00:24:52.464 }, 00:24:52.464 "memory_domains": [ 00:24:52.464 { 00:24:52.464 "dma_device_id": "system", 00:24:52.464 "dma_device_type": 1 00:24:52.464 }, 00:24:52.464 { 00:24:52.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.464 "dma_device_type": 2 00:24:52.464 } 00:24:52.464 ], 00:24:52.464 "driver_specific": {} 00:24:52.464 } 00:24:52.464 ] 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:52.464 [2024-05-16 07:37:45.977061] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:52.464 [2024-05-16 07:37:45.977123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:52.464 [2024-05-16 07:37:45.977130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:52.464 [2024-05-16 07:37:45.977580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:52.464 [2024-05-16 07:37:45.977597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.464 07:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.720 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.720 "name": "Existed_Raid", 00:24:52.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.720 "strip_size_kb": 64, 00:24:52.720 "state": "configuring", 00:24:52.720 "raid_level": "concat", 00:24:52.720 "superblock": false, 00:24:52.720 "num_base_bdevs": 4, 00:24:52.720 "num_base_bdevs_discovered": 3, 00:24:52.720 "num_base_bdevs_operational": 4, 00:24:52.720 "base_bdevs_list": [ 00:24:52.720 { 00:24:52.720 "name": "BaseBdev1", 00:24:52.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.720 "is_configured": false, 00:24:52.720 "data_offset": 0, 00:24:52.720 "data_size": 0 00:24:52.720 }, 00:24:52.720 { 00:24:52.720 "name": "BaseBdev2", 00:24:52.720 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:52.720 "is_configured": true, 00:24:52.720 "data_offset": 0, 00:24:52.720 "data_size": 65536 00:24:52.720 }, 00:24:52.720 { 00:24:52.720 "name": "BaseBdev3", 00:24:52.720 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:52.720 "is_configured": true, 00:24:52.720 "data_offset": 0, 00:24:52.720 "data_size": 65536 00:24:52.720 }, 00:24:52.720 { 00:24:52.720 "name": "BaseBdev4", 00:24:52.720 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:52.721 "is_configured": true, 00:24:52.721 "data_offset": 0, 00:24:52.721 "data_size": 65536 00:24:52.721 } 00:24:52.721 ] 00:24:52.721 }' 00:24:52.721 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.721 07:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.978 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:53.545 [2024-05-16 07:37:46.821110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 
00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.545 07:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.802 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.802 "name": "Existed_Raid", 00:24:53.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.802 "strip_size_kb": 64, 00:24:53.802 "state": "configuring", 00:24:53.802 "raid_level": "concat", 00:24:53.802 "superblock": false, 00:24:53.802 "num_base_bdevs": 4, 00:24:53.803 "num_base_bdevs_discovered": 2, 00:24:53.803 "num_base_bdevs_operational": 4, 00:24:53.803 "base_bdevs_list": [ 00:24:53.803 { 00:24:53.803 "name": "BaseBdev1", 00:24:53.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.803 "is_configured": false, 00:24:53.803 "data_offset": 0, 00:24:53.803 "data_size": 0 00:24:53.803 }, 00:24:53.803 { 00:24:53.803 "name": null, 00:24:53.803 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:53.803 "is_configured": false, 00:24:53.803 "data_offset": 0, 00:24:53.803 "data_size": 65536 00:24:53.803 }, 00:24:53.803 { 00:24:53.803 "name": "BaseBdev3", 00:24:53.803 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:53.803 "is_configured": true, 00:24:53.803 "data_offset": 0, 00:24:53.803 "data_size": 65536 00:24:53.803 }, 00:24:53.803 { 00:24:53.803 "name": "BaseBdev4", 00:24:53.803 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:53.803 "is_configured": true, 00:24:53.803 "data_offset": 0, 00:24:53.803 "data_size": 65536 00:24:53.803 } 00:24:53.803 ] 00:24:53.803 }' 00:24:53.803 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.803 07:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.059 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.060 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:54.317 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:24:54.318 07:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:54.575 [2024-05-16 07:37:47.985275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.575 BaseBdev1 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:54.575 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:54.831 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:55.089 [ 00:24:55.089 { 00:24:55.089 "name": "BaseBdev1", 00:24:55.089 "aliases": [ 00:24:55.089 "312c6afc-1357-11ef-8e8f-9dd684e56d79" 00:24:55.089 ], 00:24:55.089 "product_name": "Malloc disk", 00:24:55.089 "block_size": 512, 00:24:55.089 "num_blocks": 65536, 00:24:55.089 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:55.089 "assigned_rate_limits": { 00:24:55.089 "rw_ios_per_sec": 0, 00:24:55.089 "rw_mbytes_per_sec": 0, 00:24:55.089 "r_mbytes_per_sec": 0, 00:24:55.089 "w_mbytes_per_sec": 0 00:24:55.089 }, 00:24:55.089 "claimed": true, 00:24:55.089 "claim_type": "exclusive_write", 00:24:55.089 "zoned": false, 00:24:55.089 "supported_io_types": { 00:24:55.089 "read": true, 00:24:55.089 "write": true, 00:24:55.089 "unmap": true, 00:24:55.089 "write_zeroes": true, 00:24:55.089 "flush": true, 00:24:55.089 "reset": true, 00:24:55.089 "compare": false, 00:24:55.089 "compare_and_write": false, 00:24:55.089 "abort": true, 00:24:55.089 "nvme_admin": false, 00:24:55.089 "nvme_io": false 00:24:55.089 }, 00:24:55.089 "memory_domains": [ 00:24:55.089 { 00:24:55.089 "dma_device_id": "system", 00:24:55.089 "dma_device_type": 1 00:24:55.089 }, 00:24:55.089 { 00:24:55.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.089 "dma_device_type": 2 00:24:55.089 } 00:24:55.089 ], 00:24:55.089 "driver_specific": {} 00:24:55.089 } 00:24:55.089 ] 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.089 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.346 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.346 "name": "Existed_Raid", 00:24:55.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.346 "strip_size_kb": 64, 00:24:55.346 "state": "configuring", 00:24:55.346 "raid_level": "concat", 00:24:55.346 "superblock": false, 00:24:55.346 
"num_base_bdevs": 4, 00:24:55.346 "num_base_bdevs_discovered": 3, 00:24:55.346 "num_base_bdevs_operational": 4, 00:24:55.346 "base_bdevs_list": [ 00:24:55.346 { 00:24:55.346 "name": "BaseBdev1", 00:24:55.346 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:55.346 "is_configured": true, 00:24:55.346 "data_offset": 0, 00:24:55.346 "data_size": 65536 00:24:55.346 }, 00:24:55.346 { 00:24:55.346 "name": null, 00:24:55.346 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:55.346 "is_configured": false, 00:24:55.346 "data_offset": 0, 00:24:55.346 "data_size": 65536 00:24:55.346 }, 00:24:55.347 { 00:24:55.347 "name": "BaseBdev3", 00:24:55.347 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:55.347 "is_configured": true, 00:24:55.347 "data_offset": 0, 00:24:55.347 "data_size": 65536 00:24:55.347 }, 00:24:55.347 { 00:24:55.347 "name": "BaseBdev4", 00:24:55.347 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:55.347 "is_configured": true, 00:24:55.347 "data_offset": 0, 00:24:55.347 "data_size": 65536 00:24:55.347 } 00:24:55.347 ] 00:24:55.347 }' 00:24:55.347 07:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.347 07:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.606 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.606 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:55.863 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:55.863 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:56.120 [2024-05-16 07:37:49.493251] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.120 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.378 07:37:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.378 "name": "Existed_Raid", 00:24:56.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.378 "strip_size_kb": 64, 00:24:56.378 "state": "configuring", 00:24:56.378 "raid_level": "concat", 00:24:56.378 "superblock": false, 00:24:56.378 "num_base_bdevs": 4, 00:24:56.378 "num_base_bdevs_discovered": 2, 00:24:56.378 "num_base_bdevs_operational": 4, 00:24:56.378 "base_bdevs_list": [ 00:24:56.378 { 00:24:56.378 "name": "BaseBdev1", 00:24:56.378 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:56.378 "is_configured": true, 00:24:56.378 "data_offset": 0, 00:24:56.378 "data_size": 65536 00:24:56.378 }, 00:24:56.378 { 00:24:56.379 "name": null, 00:24:56.379 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:56.379 "is_configured": false, 00:24:56.379 "data_offset": 0, 00:24:56.379 "data_size": 65536 00:24:56.379 }, 00:24:56.379 { 00:24:56.379 "name": null, 00:24:56.379 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:56.379 "is_configured": false, 00:24:56.379 "data_offset": 0, 00:24:56.379 "data_size": 65536 00:24:56.379 }, 00:24:56.379 { 00:24:56.379 "name": "BaseBdev4", 00:24:56.379 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:56.379 "is_configured": true, 00:24:56.379 "data_offset": 0, 00:24:56.379 "data_size": 65536 00:24:56.379 } 00:24:56.379 ] 00:24:56.379 }' 00:24:56.379 07:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.379 07:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.944 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.944 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:56.944 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:24:56.944 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:57.203 [2024-05-16 07:37:50.633335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.203 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.460 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.460 "name": "Existed_Raid", 00:24:57.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.460 "strip_size_kb": 64, 00:24:57.460 "state": "configuring", 00:24:57.460 "raid_level": "concat", 00:24:57.460 "superblock": false, 00:24:57.460 "num_base_bdevs": 4, 00:24:57.460 "num_base_bdevs_discovered": 3, 00:24:57.460 "num_base_bdevs_operational": 4, 00:24:57.460 "base_bdevs_list": [ 00:24:57.460 { 00:24:57.460 "name": "BaseBdev1", 00:24:57.460 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:57.460 "is_configured": true, 00:24:57.460 "data_offset": 0, 00:24:57.460 "data_size": 65536 00:24:57.460 }, 00:24:57.460 { 00:24:57.460 "name": null, 00:24:57.460 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:57.460 "is_configured": false, 00:24:57.460 "data_offset": 0, 00:24:57.460 "data_size": 65536 00:24:57.460 }, 00:24:57.460 { 00:24:57.460 "name": "BaseBdev3", 00:24:57.460 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:57.460 "is_configured": true, 00:24:57.460 "data_offset": 0, 00:24:57.460 "data_size": 65536 00:24:57.460 }, 00:24:57.460 { 00:24:57.460 "name": "BaseBdev4", 00:24:57.460 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:57.460 "is_configured": true, 00:24:57.460 "data_offset": 0, 00:24:57.460 "data_size": 65536 00:24:57.460 } 00:24:57.460 ] 00:24:57.460 }' 00:24:57.460 07:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.460 07:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.716 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:57.716 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.974 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:24:57.974 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:58.230 [2024-05-16 07:37:51.741420] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.230 07:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.487 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.487 "name": "Existed_Raid", 00:24:58.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.487 "strip_size_kb": 64, 00:24:58.487 "state": "configuring", 00:24:58.487 "raid_level": "concat", 00:24:58.487 "superblock": false, 00:24:58.487 "num_base_bdevs": 4, 00:24:58.487 "num_base_bdevs_discovered": 2, 00:24:58.487 "num_base_bdevs_operational": 4, 00:24:58.487 "base_bdevs_list": [ 00:24:58.487 { 00:24:58.487 "name": null, 00:24:58.487 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:58.487 "is_configured": false, 00:24:58.487 "data_offset": 0, 00:24:58.487 "data_size": 65536 00:24:58.487 }, 00:24:58.487 { 00:24:58.487 "name": null, 00:24:58.487 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:58.487 "is_configured": false, 00:24:58.487 "data_offset": 0, 00:24:58.487 "data_size": 65536 00:24:58.487 }, 00:24:58.487 { 00:24:58.487 "name": "BaseBdev3", 00:24:58.487 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:58.487 "is_configured": true, 00:24:58.487 "data_offset": 0, 00:24:58.487 "data_size": 65536 00:24:58.487 }, 00:24:58.487 { 00:24:58.487 "name": "BaseBdev4", 00:24:58.487 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:58.487 "is_configured": true, 00:24:58.487 "data_offset": 0, 00:24:58.487 "data_size": 65536 00:24:58.487 } 00:24:58.487 ] 00:24:58.487 }' 00:24:58.487 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.487 07:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.050 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.050 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:59.050 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:24:59.050 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:59.308 [2024-05-16 07:37:52.798173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.308 07:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.565 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.566 "name": "Existed_Raid", 00:24:59.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.566 "strip_size_kb": 64, 00:24:59.566 "state": "configuring", 00:24:59.566 "raid_level": "concat", 00:24:59.566 "superblock": false, 00:24:59.566 "num_base_bdevs": 4, 00:24:59.566 "num_base_bdevs_discovered": 3, 00:24:59.566 "num_base_bdevs_operational": 4, 00:24:59.566 "base_bdevs_list": [ 00:24:59.566 { 00:24:59.566 "name": null, 00:24:59.566 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:24:59.566 "is_configured": false, 00:24:59.566 "data_offset": 0, 00:24:59.566 "data_size": 65536 00:24:59.566 }, 00:24:59.566 { 00:24:59.566 "name": "BaseBdev2", 00:24:59.566 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:24:59.566 "is_configured": true, 00:24:59.566 "data_offset": 0, 00:24:59.566 "data_size": 65536 00:24:59.566 }, 00:24:59.566 { 00:24:59.566 "name": "BaseBdev3", 00:24:59.566 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:24:59.566 "is_configured": true, 00:24:59.566 "data_offset": 0, 00:24:59.566 "data_size": 65536 00:24:59.566 }, 00:24:59.566 { 00:24:59.566 "name": "BaseBdev4", 00:24:59.566 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:24:59.566 "is_configured": true, 00:24:59.566 "data_offset": 0, 00:24:59.566 "data_size": 65536 00:24:59.566 } 00:24:59.566 ] 00:24:59.566 }' 00:24:59.566 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.566 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.823 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:59.823 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.081 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:25:00.081 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:00.081 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.341 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 312c6afc-1357-11ef-8e8f-9dd684e56d79 00:25:00.600 [2024-05-16 07:37:53.946294] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:00.600 [2024-05-16 07:37:53.946322] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a83ff00 00:25:00.600 [2024-05-16 07:37:53.946326] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:25:00.600 [2024-05-16 07:37:53.946347] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8a2e20 00:25:00.600 [2024-05-16 07:37:53.946403] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a83ff00 00:25:00.600 [2024-05-16 07:37:53.946406] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a83ff00 00:25:00.600 [2024-05-16 07:37:53.946434] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.600 NewBaseBdev 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:00.600 07:37:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:00.858 07:37:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:01.116 [ 00:25:01.116 { 00:25:01.116 "name": "NewBaseBdev", 00:25:01.116 "aliases": [ 00:25:01.116 "312c6afc-1357-11ef-8e8f-9dd684e56d79" 00:25:01.116 ], 00:25:01.116 "product_name": "Malloc disk", 00:25:01.116 "block_size": 512, 00:25:01.116 "num_blocks": 65536, 00:25:01.116 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:25:01.116 "assigned_rate_limits": { 00:25:01.116 "rw_ios_per_sec": 0, 00:25:01.116 "rw_mbytes_per_sec": 0, 00:25:01.116 "r_mbytes_per_sec": 0, 00:25:01.116 "w_mbytes_per_sec": 0 00:25:01.116 }, 00:25:01.116 "claimed": true, 00:25:01.116 "claim_type": "exclusive_write", 00:25:01.116 "zoned": false, 00:25:01.116 "supported_io_types": { 00:25:01.116 "read": true, 00:25:01.116 "write": true, 00:25:01.116 "unmap": true, 00:25:01.116 "write_zeroes": true, 00:25:01.116 "flush": true, 00:25:01.116 "reset": true, 00:25:01.116 "compare": false, 00:25:01.116 "compare_and_write": false, 00:25:01.116 "abort": true, 00:25:01.116 "nvme_admin": false, 00:25:01.116 "nvme_io": false 00:25:01.116 }, 00:25:01.116 "memory_domains": [ 00:25:01.116 { 00:25:01.116 "dma_device_id": "system", 00:25:01.116 "dma_device_type": 1 00:25:01.116 }, 00:25:01.116 { 00:25:01.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.116 "dma_device_type": 2 00:25:01.116 } 00:25:01.116 ], 00:25:01.116 "driver_specific": {} 00:25:01.116 } 00:25:01.116 ] 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 
4 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.116 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.682 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.682 "name": "Existed_Raid", 00:25:01.682 "uuid": "34ba0498-1357-11ef-8e8f-9dd684e56d79", 00:25:01.682 "strip_size_kb": 64, 00:25:01.682 "state": "online", 00:25:01.682 "raid_level": "concat", 00:25:01.682 "superblock": false, 00:25:01.682 "num_base_bdevs": 4, 00:25:01.682 "num_base_bdevs_discovered": 4, 00:25:01.682 "num_base_bdevs_operational": 4, 00:25:01.682 "base_bdevs_list": [ 00:25:01.682 { 00:25:01.682 "name": "NewBaseBdev", 00:25:01.682 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:25:01.682 "is_configured": true, 00:25:01.682 "data_offset": 0, 00:25:01.682 "data_size": 65536 00:25:01.682 }, 00:25:01.682 { 00:25:01.682 "name": "BaseBdev2", 00:25:01.682 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:25:01.682 "is_configured": true, 00:25:01.682 "data_offset": 0, 00:25:01.682 "data_size": 65536 00:25:01.682 }, 00:25:01.682 { 00:25:01.682 "name": "BaseBdev3", 00:25:01.682 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:25:01.682 "is_configured": true, 00:25:01.682 "data_offset": 0, 00:25:01.682 "data_size": 65536 00:25:01.682 }, 00:25:01.682 { 00:25:01.682 "name": "BaseBdev4", 00:25:01.682 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:25:01.682 "is_configured": true, 00:25:01.682 "data_offset": 0, 00:25:01.682 "data_size": 65536 00:25:01.682 } 00:25:01.682 ] 00:25:01.682 }' 00:25:01.682 07:37:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.682 07:37:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:01.941 
07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:01.941 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:02.200 [2024-05-16 07:37:55.606197] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.200 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:02.200 "name": "Existed_Raid", 00:25:02.200 "aliases": [ 00:25:02.200 "34ba0498-1357-11ef-8e8f-9dd684e56d79" 00:25:02.200 ], 00:25:02.200 "product_name": "Raid Volume", 00:25:02.200 "block_size": 512, 00:25:02.200 "num_blocks": 262144, 00:25:02.200 "uuid": "34ba0498-1357-11ef-8e8f-9dd684e56d79", 00:25:02.200 "assigned_rate_limits": { 00:25:02.200 "rw_ios_per_sec": 0, 00:25:02.200 "rw_mbytes_per_sec": 0, 00:25:02.200 "r_mbytes_per_sec": 0, 00:25:02.200 "w_mbytes_per_sec": 0 00:25:02.200 }, 00:25:02.200 "claimed": false, 00:25:02.200 "zoned": false, 00:25:02.200 "supported_io_types": { 00:25:02.200 "read": true, 00:25:02.200 "write": true, 00:25:02.200 "unmap": true, 00:25:02.200 "write_zeroes": true, 00:25:02.200 "flush": true, 00:25:02.200 "reset": true, 00:25:02.200 "compare": false, 00:25:02.200 "compare_and_write": false, 00:25:02.200 "abort": false, 00:25:02.200 "nvme_admin": false, 00:25:02.200 "nvme_io": false 00:25:02.200 }, 00:25:02.200 "memory_domains": [ 00:25:02.200 { 00:25:02.200 "dma_device_id": "system", 00:25:02.200 "dma_device_type": 1 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.200 "dma_device_type": 2 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "system", 00:25:02.200 "dma_device_type": 1 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.200 "dma_device_type": 2 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "system", 00:25:02.200 "dma_device_type": 1 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.200 "dma_device_type": 2 00:25:02.200 }, 00:25:02.200 { 00:25:02.200 "dma_device_id": "system", 00:25:02.200 "dma_device_type": 1 00:25:02.200 }, 00:25:02.201 { 00:25:02.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.201 "dma_device_type": 2 00:25:02.201 } 00:25:02.201 ], 00:25:02.201 "driver_specific": { 00:25:02.201 "raid": { 00:25:02.201 "uuid": "34ba0498-1357-11ef-8e8f-9dd684e56d79", 00:25:02.201 "strip_size_kb": 64, 00:25:02.201 "state": "online", 00:25:02.201 "raid_level": "concat", 00:25:02.201 "superblock": false, 00:25:02.201 "num_base_bdevs": 4, 00:25:02.201 "num_base_bdevs_discovered": 4, 00:25:02.201 "num_base_bdevs_operational": 4, 00:25:02.201 "base_bdevs_list": [ 00:25:02.201 { 00:25:02.201 "name": "NewBaseBdev", 00:25:02.201 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:25:02.201 "is_configured": true, 00:25:02.201 "data_offset": 0, 00:25:02.201 "data_size": 65536 00:25:02.201 }, 00:25:02.201 { 00:25:02.201 "name": "BaseBdev2", 00:25:02.201 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:25:02.201 "is_configured": true, 00:25:02.201 "data_offset": 0, 00:25:02.201 "data_size": 65536 00:25:02.201 }, 00:25:02.201 { 00:25:02.201 "name": "BaseBdev3", 00:25:02.201 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:25:02.201 "is_configured": true, 00:25:02.201 "data_offset": 0, 
00:25:02.201 "data_size": 65536 00:25:02.201 }, 00:25:02.201 { 00:25:02.201 "name": "BaseBdev4", 00:25:02.201 "uuid": "2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:25:02.201 "is_configured": true, 00:25:02.201 "data_offset": 0, 00:25:02.201 "data_size": 65536 00:25:02.201 } 00:25:02.201 ] 00:25:02.201 } 00:25:02.201 } 00:25:02.201 }' 00:25:02.201 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:02.201 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:25:02.201 BaseBdev2 00:25:02.201 BaseBdev3 00:25:02.201 BaseBdev4' 00:25:02.201 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:02.201 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:02.201 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:02.471 "name": "NewBaseBdev", 00:25:02.471 "aliases": [ 00:25:02.471 "312c6afc-1357-11ef-8e8f-9dd684e56d79" 00:25:02.471 ], 00:25:02.471 "product_name": "Malloc disk", 00:25:02.471 "block_size": 512, 00:25:02.471 "num_blocks": 65536, 00:25:02.471 "uuid": "312c6afc-1357-11ef-8e8f-9dd684e56d79", 00:25:02.471 "assigned_rate_limits": { 00:25:02.471 "rw_ios_per_sec": 0, 00:25:02.471 "rw_mbytes_per_sec": 0, 00:25:02.471 "r_mbytes_per_sec": 0, 00:25:02.471 "w_mbytes_per_sec": 0 00:25:02.471 }, 00:25:02.471 "claimed": true, 00:25:02.471 "claim_type": "exclusive_write", 00:25:02.471 "zoned": false, 00:25:02.471 "supported_io_types": { 00:25:02.471 "read": true, 00:25:02.471 "write": true, 00:25:02.471 "unmap": true, 00:25:02.471 "write_zeroes": true, 00:25:02.471 "flush": true, 00:25:02.471 "reset": true, 00:25:02.471 "compare": false, 00:25:02.471 "compare_and_write": false, 00:25:02.471 "abort": true, 00:25:02.471 "nvme_admin": false, 00:25:02.471 "nvme_io": false 00:25:02.471 }, 00:25:02.471 "memory_domains": [ 00:25:02.471 { 00:25:02.471 "dma_device_id": "system", 00:25:02.471 "dma_device_type": 1 00:25:02.471 }, 00:25:02.471 { 00:25:02.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.471 "dma_device_type": 2 00:25:02.471 } 00:25:02.471 ], 00:25:02.471 "driver_specific": {} 00:25:02.471 }' 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
00:25:02.471 07:37:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:02.471 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:02.471 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:02.471 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:02.471 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:02.746 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:02.746 "name": "BaseBdev2", 00:25:02.746 "aliases": [ 00:25:02.746 "2eb7bb45-1357-11ef-8e8f-9dd684e56d79" 00:25:02.746 ], 00:25:02.746 "product_name": "Malloc disk", 00:25:02.746 "block_size": 512, 00:25:02.746 "num_blocks": 65536, 00:25:02.746 "uuid": "2eb7bb45-1357-11ef-8e8f-9dd684e56d79", 00:25:02.746 "assigned_rate_limits": { 00:25:02.746 "rw_ios_per_sec": 0, 00:25:02.746 "rw_mbytes_per_sec": 0, 00:25:02.746 "r_mbytes_per_sec": 0, 00:25:02.746 "w_mbytes_per_sec": 0 00:25:02.746 }, 00:25:02.746 "claimed": true, 00:25:02.746 "claim_type": "exclusive_write", 00:25:02.746 "zoned": false, 00:25:02.746 "supported_io_types": { 00:25:02.746 "read": true, 00:25:02.746 "write": true, 00:25:02.746 "unmap": true, 00:25:02.746 "write_zeroes": true, 00:25:02.746 "flush": true, 00:25:02.746 "reset": true, 00:25:02.746 "compare": false, 00:25:02.746 "compare_and_write": false, 00:25:02.746 "abort": true, 00:25:02.746 "nvme_admin": false, 00:25:02.746 "nvme_io": false 00:25:02.746 }, 00:25:02.746 "memory_domains": [ 00:25:02.746 { 00:25:02.746 "dma_device_id": "system", 00:25:02.746 "dma_device_type": 1 00:25:02.746 }, 00:25:02.746 { 00:25:02.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.746 "dma_device_type": 2 00:25:02.746 } 00:25:02.746 ], 00:25:02.746 "driver_specific": {} 00:25:02.746 }' 00:25:02.746 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:02.746 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:02.746 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:02.746 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:02.747 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:25:03.005 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:03.264 "name": "BaseBdev3", 00:25:03.264 "aliases": [ 00:25:03.264 "2f2b1694-1357-11ef-8e8f-9dd684e56d79" 00:25:03.264 ], 00:25:03.264 "product_name": "Malloc disk", 00:25:03.264 "block_size": 512, 00:25:03.264 "num_blocks": 65536, 00:25:03.264 "uuid": "2f2b1694-1357-11ef-8e8f-9dd684e56d79", 00:25:03.264 "assigned_rate_limits": { 00:25:03.264 "rw_ios_per_sec": 0, 00:25:03.264 "rw_mbytes_per_sec": 0, 00:25:03.264 "r_mbytes_per_sec": 0, 00:25:03.264 "w_mbytes_per_sec": 0 00:25:03.264 }, 00:25:03.264 "claimed": true, 00:25:03.264 "claim_type": "exclusive_write", 00:25:03.264 "zoned": false, 00:25:03.264 "supported_io_types": { 00:25:03.264 "read": true, 00:25:03.264 "write": true, 00:25:03.264 "unmap": true, 00:25:03.264 "write_zeroes": true, 00:25:03.264 "flush": true, 00:25:03.264 "reset": true, 00:25:03.264 "compare": false, 00:25:03.264 "compare_and_write": false, 00:25:03.264 "abort": true, 00:25:03.264 "nvme_admin": false, 00:25:03.264 "nvme_io": false 00:25:03.264 }, 00:25:03.264 "memory_domains": [ 00:25:03.264 { 00:25:03.264 "dma_device_id": "system", 00:25:03.264 "dma_device_type": 1 00:25:03.264 }, 00:25:03.264 { 00:25:03.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.264 "dma_device_type": 2 00:25:03.264 } 00:25:03.264 ], 00:25:03.264 "driver_specific": {} 00:25:03.264 }' 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:03.264 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:03.523 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:03.523 "name": "BaseBdev4", 00:25:03.523 "aliases": [ 00:25:03.523 "2f91a232-1357-11ef-8e8f-9dd684e56d79" 00:25:03.523 ], 00:25:03.523 "product_name": "Malloc disk", 00:25:03.523 "block_size": 512, 00:25:03.523 "num_blocks": 65536, 00:25:03.523 "uuid": 
"2f91a232-1357-11ef-8e8f-9dd684e56d79", 00:25:03.523 "assigned_rate_limits": { 00:25:03.523 "rw_ios_per_sec": 0, 00:25:03.523 "rw_mbytes_per_sec": 0, 00:25:03.523 "r_mbytes_per_sec": 0, 00:25:03.523 "w_mbytes_per_sec": 0 00:25:03.523 }, 00:25:03.523 "claimed": true, 00:25:03.523 "claim_type": "exclusive_write", 00:25:03.523 "zoned": false, 00:25:03.523 "supported_io_types": { 00:25:03.523 "read": true, 00:25:03.523 "write": true, 00:25:03.523 "unmap": true, 00:25:03.523 "write_zeroes": true, 00:25:03.523 "flush": true, 00:25:03.524 "reset": true, 00:25:03.524 "compare": false, 00:25:03.524 "compare_and_write": false, 00:25:03.524 "abort": true, 00:25:03.524 "nvme_admin": false, 00:25:03.524 "nvme_io": false 00:25:03.524 }, 00:25:03.524 "memory_domains": [ 00:25:03.524 { 00:25:03.524 "dma_device_id": "system", 00:25:03.524 "dma_device_type": 1 00:25:03.524 }, 00:25:03.524 { 00:25:03.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.524 "dma_device_type": 2 00:25:03.524 } 00:25:03.524 ], 00:25:03.524 "driver_specific": {} 00:25:03.524 }' 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:03.524 07:37:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:03.783 [2024-05-16 07:37:57.122168] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.783 [2024-05-16 07:37:57.122190] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.783 [2024-05-16 07:37:57.122204] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.783 [2024-05-16 07:37:57.122217] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.783 [2024-05-16 07:37:57.122221] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a83ff00 name Existed_Raid, state offline 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 59670 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 59670 ']' 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 59670 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@951 -- # uname 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 59670 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:25:03.783 killing process with pid 59670 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59670' 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 59670 00:25:03.783 [2024-05-16 07:37:57.147822] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 59670 00:25:03.783 [2024-05-16 07:37:57.166750] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:25:03.783 00:25:03.783 real 0m27.221s 00:25:03.783 user 0m50.098s 00:25:03.783 sys 0m3.560s 00:25:03.783 ************************************ 00:25:03.783 END TEST raid_state_function_test 00:25:03.783 ************************************ 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:03.783 07:37:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.043 07:37:57 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:25:04.043 07:37:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:04.043 07:37:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:04.043 07:37:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:04.043 ************************************ 00:25:04.043 START TEST raid_state_function_test_sb 00:25:04.043 ************************************ 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=concat 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:25:04.043 07:37:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' concat '!=' raid1 ']' 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size=64 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@233 -- # strip_size_create_arg='-z 64' 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=60489 00:25:04.043 Process raid pid: 60489 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 60489' 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 60489 /var/tmp/spdk-raid.sock 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 60489 ']' 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:04.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:04.043 07:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.043 [2024-05-16 07:37:57.385587] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:25:04.043 [2024-05-16 07:37:57.385860] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:25:04.302 EAL: TSC is not safe to use in SMP mode 00:25:04.302 EAL: TSC is not invariant 00:25:04.302 [2024-05-16 07:37:57.825596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.561 [2024-05-16 07:37:57.907407] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:25:04.561 [2024-05-16 07:37:57.909488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.561 [2024-05-16 07:37:57.910126] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.561 [2024-05-16 07:37:57.910139] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:05.128 [2024-05-16 07:37:58.624407] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:05.128 [2024-05-16 07:37:58.624486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:05.128 [2024-05-16 07:37:58.624493] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:05.128 [2024-05-16 07:37:58.624509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:05.128 [2024-05-16 07:37:58.624515] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:05.128 [2024-05-16 07:37:58.624526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:05.128 [2024-05-16 07:37:58.624531] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:05.128 [2024-05-16 07:37:58.624542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.128 
07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.128 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.387 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.387 "name": "Existed_Raid", 00:25:05.387 "uuid": "3783d59a-1357-11ef-8e8f-9dd684e56d79", 00:25:05.387 "strip_size_kb": 64, 00:25:05.387 "state": "configuring", 00:25:05.387 "raid_level": "concat", 00:25:05.387 "superblock": true, 00:25:05.387 "num_base_bdevs": 4, 00:25:05.387 "num_base_bdevs_discovered": 0, 00:25:05.387 "num_base_bdevs_operational": 4, 00:25:05.387 "base_bdevs_list": [ 00:25:05.387 { 00:25:05.387 "name": "BaseBdev1", 00:25:05.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.387 "is_configured": false, 00:25:05.387 "data_offset": 0, 00:25:05.387 "data_size": 0 00:25:05.387 }, 00:25:05.387 { 00:25:05.387 "name": "BaseBdev2", 00:25:05.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.387 "is_configured": false, 00:25:05.387 "data_offset": 0, 00:25:05.387 "data_size": 0 00:25:05.387 }, 00:25:05.387 { 00:25:05.387 "name": "BaseBdev3", 00:25:05.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.387 "is_configured": false, 00:25:05.387 "data_offset": 0, 00:25:05.387 "data_size": 0 00:25:05.387 }, 00:25:05.387 { 00:25:05.387 "name": "BaseBdev4", 00:25:05.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.387 "is_configured": false, 00:25:05.387 "data_offset": 0, 00:25:05.387 "data_size": 0 00:25:05.387 } 00:25:05.387 ] 00:25:05.387 }' 00:25:05.387 07:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.387 07:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.646 07:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:05.906 [2024-05-16 07:37:59.312340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.906 [2024-05-16 07:37:59.312366] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af94500 name Existed_Raid, state configuring 00:25:05.906 07:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:06.168 [2024-05-16 07:37:59.504346] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:06.168 [2024-05-16 07:37:59.504389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:06.168 [2024-05-16 07:37:59.504393] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.168 [2024-05-16 07:37:59.504400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.168 
[2024-05-16 07:37:59.504403] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:06.168 [2024-05-16 07:37:59.504409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:06.168 [2024-05-16 07:37:59.504412] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:06.168 [2024-05-16 07:37:59.504419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:06.168 07:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:06.427 [2024-05-16 07:37:59.753259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.427 BaseBdev1 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:06.427 07:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.685 07:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:06.943 [ 00:25:06.943 { 00:25:06.943 "name": "BaseBdev1", 00:25:06.943 "aliases": [ 00:25:06.943 "382ff20f-1357-11ef-8e8f-9dd684e56d79" 00:25:06.943 ], 00:25:06.943 "product_name": "Malloc disk", 00:25:06.943 "block_size": 512, 00:25:06.945 "num_blocks": 65536, 00:25:06.945 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:06.945 "assigned_rate_limits": { 00:25:06.945 "rw_ios_per_sec": 0, 00:25:06.945 "rw_mbytes_per_sec": 0, 00:25:06.945 "r_mbytes_per_sec": 0, 00:25:06.945 "w_mbytes_per_sec": 0 00:25:06.945 }, 00:25:06.945 "claimed": true, 00:25:06.945 "claim_type": "exclusive_write", 00:25:06.945 "zoned": false, 00:25:06.945 "supported_io_types": { 00:25:06.945 "read": true, 00:25:06.945 "write": true, 00:25:06.945 "unmap": true, 00:25:06.945 "write_zeroes": true, 00:25:06.945 "flush": true, 00:25:06.945 "reset": true, 00:25:06.945 "compare": false, 00:25:06.945 "compare_and_write": false, 00:25:06.945 "abort": true, 00:25:06.945 "nvme_admin": false, 00:25:06.945 "nvme_io": false 00:25:06.945 }, 00:25:06.945 "memory_domains": [ 00:25:06.945 { 00:25:06.945 "dma_device_id": "system", 00:25:06.945 "dma_device_type": 1 00:25:06.945 }, 00:25:06.945 { 00:25:06.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.945 "dma_device_type": 2 00:25:06.945 } 00:25:06.945 ], 00:25:06.945 "driver_specific": {} 00:25:06.945 } 00:25:06.945 ] 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.945 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.204 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.204 "name": "Existed_Raid", 00:25:07.204 "uuid": "380a1a5d-1357-11ef-8e8f-9dd684e56d79", 00:25:07.204 "strip_size_kb": 64, 00:25:07.204 "state": "configuring", 00:25:07.204 "raid_level": "concat", 00:25:07.204 "superblock": true, 00:25:07.204 "num_base_bdevs": 4, 00:25:07.204 "num_base_bdevs_discovered": 1, 00:25:07.204 "num_base_bdevs_operational": 4, 00:25:07.204 "base_bdevs_list": [ 00:25:07.204 { 00:25:07.204 "name": "BaseBdev1", 00:25:07.204 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:07.204 "is_configured": true, 00:25:07.204 "data_offset": 2048, 00:25:07.204 "data_size": 63488 00:25:07.204 }, 00:25:07.204 { 00:25:07.204 "name": "BaseBdev2", 00:25:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.204 "is_configured": false, 00:25:07.204 "data_offset": 0, 00:25:07.204 "data_size": 0 00:25:07.204 }, 00:25:07.204 { 00:25:07.204 "name": "BaseBdev3", 00:25:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.204 "is_configured": false, 00:25:07.204 "data_offset": 0, 00:25:07.204 "data_size": 0 00:25:07.204 }, 00:25:07.204 { 00:25:07.204 "name": "BaseBdev4", 00:25:07.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.204 "is_configured": false, 00:25:07.204 "data_offset": 0, 00:25:07.204 "data_size": 0 00:25:07.204 } 00:25:07.204 ] 00:25:07.204 }' 00:25:07.204 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.204 07:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 07:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:07.723 [2024-05-16 07:38:01.100335] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.723 [2024-05-16 07:38:01.100363] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af94500 name Existed_Raid, state configuring 00:25:07.723 07:38:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:07.981 [2024-05-16 07:38:01.392355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.981 [2024-05-16 07:38:01.393041] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.981 [2024-05-16 07:38:01.393082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.981 [2024-05-16 07:38:01.393086] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:07.981 [2024-05-16 07:38:01.393094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.981 [2024-05-16 07:38:01.393113] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:07.981 [2024-05-16 07:38:01.393121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.981 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.240 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.240 "name": "Existed_Raid", 00:25:08.240 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:08.240 "strip_size_kb": 64, 00:25:08.240 "state": "configuring", 00:25:08.240 "raid_level": "concat", 00:25:08.240 "superblock": true, 00:25:08.240 "num_base_bdevs": 4, 00:25:08.240 "num_base_bdevs_discovered": 1, 00:25:08.240 "num_base_bdevs_operational": 4, 00:25:08.240 "base_bdevs_list": [ 00:25:08.240 { 00:25:08.240 "name": "BaseBdev1", 00:25:08.240 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:08.240 "is_configured": true, 00:25:08.240 "data_offset": 2048, 00:25:08.240 
"data_size": 63488 00:25:08.240 }, 00:25:08.240 { 00:25:08.240 "name": "BaseBdev2", 00:25:08.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.240 "is_configured": false, 00:25:08.241 "data_offset": 0, 00:25:08.241 "data_size": 0 00:25:08.241 }, 00:25:08.241 { 00:25:08.241 "name": "BaseBdev3", 00:25:08.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.241 "is_configured": false, 00:25:08.241 "data_offset": 0, 00:25:08.241 "data_size": 0 00:25:08.241 }, 00:25:08.241 { 00:25:08.241 "name": "BaseBdev4", 00:25:08.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.241 "is_configured": false, 00:25:08.241 "data_offset": 0, 00:25:08.241 "data_size": 0 00:25:08.241 } 00:25:08.241 ] 00:25:08.241 }' 00:25:08.241 07:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.241 07:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.499 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:08.757 [2024-05-16 07:38:02.272441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.757 BaseBdev2 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:08.757 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.017 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:09.277 [ 00:25:09.277 { 00:25:09.277 "name": "BaseBdev2", 00:25:09.277 "aliases": [ 00:25:09.277 "39b07723-1357-11ef-8e8f-9dd684e56d79" 00:25:09.277 ], 00:25:09.277 "product_name": "Malloc disk", 00:25:09.277 "block_size": 512, 00:25:09.277 "num_blocks": 65536, 00:25:09.277 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:09.277 "assigned_rate_limits": { 00:25:09.277 "rw_ios_per_sec": 0, 00:25:09.277 "rw_mbytes_per_sec": 0, 00:25:09.277 "r_mbytes_per_sec": 0, 00:25:09.277 "w_mbytes_per_sec": 0 00:25:09.277 }, 00:25:09.277 "claimed": true, 00:25:09.277 "claim_type": "exclusive_write", 00:25:09.277 "zoned": false, 00:25:09.277 "supported_io_types": { 00:25:09.277 "read": true, 00:25:09.277 "write": true, 00:25:09.277 "unmap": true, 00:25:09.277 "write_zeroes": true, 00:25:09.277 "flush": true, 00:25:09.277 "reset": true, 00:25:09.277 "compare": false, 00:25:09.277 "compare_and_write": false, 00:25:09.277 "abort": true, 00:25:09.277 "nvme_admin": false, 00:25:09.277 "nvme_io": false 00:25:09.277 }, 00:25:09.277 "memory_domains": [ 00:25:09.277 { 00:25:09.277 "dma_device_id": "system", 00:25:09.277 "dma_device_type": 1 00:25:09.277 }, 00:25:09.277 
{ 00:25:09.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.277 "dma_device_type": 2 00:25:09.277 } 00:25:09.277 ], 00:25:09.277 "driver_specific": {} 00:25:09.277 } 00:25:09.277 ] 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.277 07:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.536 07:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.536 "name": "Existed_Raid", 00:25:09.536 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:09.536 "strip_size_kb": 64, 00:25:09.536 "state": "configuring", 00:25:09.536 "raid_level": "concat", 00:25:09.536 "superblock": true, 00:25:09.536 "num_base_bdevs": 4, 00:25:09.536 "num_base_bdevs_discovered": 2, 00:25:09.536 "num_base_bdevs_operational": 4, 00:25:09.536 "base_bdevs_list": [ 00:25:09.536 { 00:25:09.536 "name": "BaseBdev1", 00:25:09.536 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:09.536 "is_configured": true, 00:25:09.536 "data_offset": 2048, 00:25:09.536 "data_size": 63488 00:25:09.536 }, 00:25:09.536 { 00:25:09.536 "name": "BaseBdev2", 00:25:09.536 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:09.536 "is_configured": true, 00:25:09.536 "data_offset": 2048, 00:25:09.536 "data_size": 63488 00:25:09.536 }, 00:25:09.536 { 00:25:09.536 "name": "BaseBdev3", 00:25:09.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.536 "is_configured": false, 00:25:09.536 "data_offset": 0, 00:25:09.536 "data_size": 0 00:25:09.536 }, 00:25:09.536 { 00:25:09.536 "name": "BaseBdev4", 00:25:09.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.536 "is_configured": false, 00:25:09.536 "data_offset": 0, 00:25:09.536 "data_size": 0 00:25:09.536 } 00:25:09.536 ] 00:25:09.536 }' 00:25:09.536 07:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:25:09.536 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.795 07:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:10.054 [2024-05-16 07:38:03.524417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:10.054 BaseBdev3 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:10.054 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.312 07:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:10.571 [ 00:25:10.571 { 00:25:10.571 "name": "BaseBdev3", 00:25:10.571 "aliases": [ 00:25:10.571 "3a6f816e-1357-11ef-8e8f-9dd684e56d79" 00:25:10.571 ], 00:25:10.571 "product_name": "Malloc disk", 00:25:10.571 "block_size": 512, 00:25:10.571 "num_blocks": 65536, 00:25:10.571 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:10.571 "assigned_rate_limits": { 00:25:10.571 "rw_ios_per_sec": 0, 00:25:10.571 "rw_mbytes_per_sec": 0, 00:25:10.571 "r_mbytes_per_sec": 0, 00:25:10.571 "w_mbytes_per_sec": 0 00:25:10.571 }, 00:25:10.571 "claimed": true, 00:25:10.571 "claim_type": "exclusive_write", 00:25:10.571 "zoned": false, 00:25:10.571 "supported_io_types": { 00:25:10.571 "read": true, 00:25:10.571 "write": true, 00:25:10.571 "unmap": true, 00:25:10.571 "write_zeroes": true, 00:25:10.571 "flush": true, 00:25:10.571 "reset": true, 00:25:10.571 "compare": false, 00:25:10.571 "compare_and_write": false, 00:25:10.571 "abort": true, 00:25:10.571 "nvme_admin": false, 00:25:10.571 "nvme_io": false 00:25:10.571 }, 00:25:10.571 "memory_domains": [ 00:25:10.571 { 00:25:10.571 "dma_device_id": "system", 00:25:10.571 "dma_device_type": 1 00:25:10.571 }, 00:25:10.571 { 00:25:10.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.571 "dma_device_type": 2 00:25:10.571 } 00:25:10.571 ], 00:25:10.571 "driver_specific": {} 00:25:10.571 } 00:25:10.571 ] 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:10.831 07:38:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.831 "name": "Existed_Raid", 00:25:10.831 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:10.831 "strip_size_kb": 64, 00:25:10.831 "state": "configuring", 00:25:10.831 "raid_level": "concat", 00:25:10.831 "superblock": true, 00:25:10.831 "num_base_bdevs": 4, 00:25:10.831 "num_base_bdevs_discovered": 3, 00:25:10.831 "num_base_bdevs_operational": 4, 00:25:10.831 "base_bdevs_list": [ 00:25:10.831 { 00:25:10.831 "name": "BaseBdev1", 00:25:10.831 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:10.831 "is_configured": true, 00:25:10.831 "data_offset": 2048, 00:25:10.831 "data_size": 63488 00:25:10.831 }, 00:25:10.831 { 00:25:10.831 "name": "BaseBdev2", 00:25:10.831 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:10.831 "is_configured": true, 00:25:10.831 "data_offset": 2048, 00:25:10.831 "data_size": 63488 00:25:10.831 }, 00:25:10.831 { 00:25:10.831 "name": "BaseBdev3", 00:25:10.831 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:10.831 "is_configured": true, 00:25:10.831 "data_offset": 2048, 00:25:10.831 "data_size": 63488 00:25:10.831 }, 00:25:10.831 { 00:25:10.831 "name": "BaseBdev4", 00:25:10.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.831 "is_configured": false, 00:25:10.831 "data_offset": 0, 00:25:10.831 "data_size": 0 00:25:10.831 } 00:25:10.831 ] 00:25:10.831 }' 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.831 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:11.398 [2024-05-16 07:38:04.924427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:11.398 [2024-05-16 07:38:04.924502] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82af94a00 00:25:11.398 [2024-05-16 07:38:04.924507] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:11.398 [2024-05-16 07:38:04.924522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82aff7ec0 00:25:11.398 [2024-05-16 07:38:04.924560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82af94a00 00:25:11.398 [2024-05-16 07:38:04.924563] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82af94a00 00:25:11.398 [2024-05-16 07:38:04.924577] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.398 BaseBdev4 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:11.398 07:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:11.964 07:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:11.964 [ 00:25:11.964 { 00:25:11.964 "name": "BaseBdev4", 00:25:11.964 "aliases": [ 00:25:11.964 "3b4520e1-1357-11ef-8e8f-9dd684e56d79" 00:25:11.964 ], 00:25:11.964 "product_name": "Malloc disk", 00:25:11.964 "block_size": 512, 00:25:11.964 "num_blocks": 65536, 00:25:11.964 "uuid": "3b4520e1-1357-11ef-8e8f-9dd684e56d79", 00:25:11.964 "assigned_rate_limits": { 00:25:11.964 "rw_ios_per_sec": 0, 00:25:11.964 "rw_mbytes_per_sec": 0, 00:25:11.964 "r_mbytes_per_sec": 0, 00:25:11.964 "w_mbytes_per_sec": 0 00:25:11.964 }, 00:25:11.964 "claimed": true, 00:25:11.965 "claim_type": "exclusive_write", 00:25:11.965 "zoned": false, 00:25:11.965 "supported_io_types": { 00:25:11.965 "read": true, 00:25:11.965 "write": true, 00:25:11.965 "unmap": true, 00:25:11.965 "write_zeroes": true, 00:25:11.965 "flush": true, 00:25:11.965 "reset": true, 00:25:11.965 "compare": false, 00:25:11.965 "compare_and_write": false, 00:25:11.965 "abort": true, 00:25:11.965 "nvme_admin": false, 00:25:11.965 "nvme_io": false 00:25:11.965 }, 00:25:11.965 "memory_domains": [ 00:25:11.965 { 00:25:11.965 "dma_device_id": "system", 00:25:11.965 "dma_device_type": 1 00:25:11.965 }, 00:25:11.965 { 00:25:11.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.965 "dma_device_type": 2 00:25:11.965 } 00:25:11.965 ], 00:25:11.965 "driver_specific": {} 00:25:11.965 } 00:25:11.965 ] 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.965 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.223 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.223 "name": "Existed_Raid", 00:25:12.223 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:12.223 "strip_size_kb": 64, 00:25:12.223 "state": "online", 00:25:12.223 "raid_level": "concat", 00:25:12.223 "superblock": true, 00:25:12.223 "num_base_bdevs": 4, 00:25:12.223 "num_base_bdevs_discovered": 4, 00:25:12.223 "num_base_bdevs_operational": 4, 00:25:12.223 "base_bdevs_list": [ 00:25:12.223 { 00:25:12.223 "name": "BaseBdev1", 00:25:12.223 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 2048, 00:25:12.223 "data_size": 63488 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev2", 00:25:12.223 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 2048, 00:25:12.223 "data_size": 63488 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev3", 00:25:12.223 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 2048, 00:25:12.223 "data_size": 63488 00:25:12.223 }, 00:25:12.223 { 00:25:12.223 "name": "BaseBdev4", 00:25:12.223 "uuid": "3b4520e1-1357-11ef-8e8f-9dd684e56d79", 00:25:12.223 "is_configured": true, 00:25:12.223 "data_offset": 2048, 00:25:12.223 "data_size": 63488 00:25:12.223 } 00:25:12.223 ] 00:25:12.223 }' 00:25:12.223 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.223 07:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:25:12.481 07:38:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:12.481 07:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:12.739 [2024-05-16 07:38:06.148356] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:12.739 "name": "Existed_Raid", 00:25:12.739 "aliases": [ 00:25:12.739 "392a3095-1357-11ef-8e8f-9dd684e56d79" 00:25:12.739 ], 00:25:12.739 "product_name": "Raid Volume", 00:25:12.739 "block_size": 512, 00:25:12.739 "num_blocks": 253952, 00:25:12.739 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "assigned_rate_limits": { 00:25:12.739 "rw_ios_per_sec": 0, 00:25:12.739 "rw_mbytes_per_sec": 0, 00:25:12.739 "r_mbytes_per_sec": 0, 00:25:12.739 "w_mbytes_per_sec": 0 00:25:12.739 }, 00:25:12.739 "claimed": false, 00:25:12.739 "zoned": false, 00:25:12.739 "supported_io_types": { 00:25:12.739 "read": true, 00:25:12.739 "write": true, 00:25:12.739 "unmap": true, 00:25:12.739 "write_zeroes": true, 00:25:12.739 "flush": true, 00:25:12.739 "reset": true, 00:25:12.739 "compare": false, 00:25:12.739 "compare_and_write": false, 00:25:12.739 "abort": false, 00:25:12.739 "nvme_admin": false, 00:25:12.739 "nvme_io": false 00:25:12.739 }, 00:25:12.739 "memory_domains": [ 00:25:12.739 { 00:25:12.739 "dma_device_id": "system", 00:25:12.739 "dma_device_type": 1 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.739 "dma_device_type": 2 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "system", 00:25:12.739 "dma_device_type": 1 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.739 "dma_device_type": 2 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "system", 00:25:12.739 "dma_device_type": 1 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.739 "dma_device_type": 2 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "system", 00:25:12.739 "dma_device_type": 1 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.739 "dma_device_type": 2 00:25:12.739 } 00:25:12.739 ], 00:25:12.739 "driver_specific": { 00:25:12.739 "raid": { 00:25:12.739 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "strip_size_kb": 64, 00:25:12.739 "state": "online", 00:25:12.739 "raid_level": "concat", 00:25:12.739 "superblock": true, 00:25:12.739 "num_base_bdevs": 4, 00:25:12.739 "num_base_bdevs_discovered": 4, 00:25:12.739 "num_base_bdevs_operational": 4, 00:25:12.739 "base_bdevs_list": [ 00:25:12.739 { 00:25:12.739 "name": "BaseBdev1", 00:25:12.739 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "is_configured": true, 00:25:12.739 "data_offset": 2048, 00:25:12.739 "data_size": 63488 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "name": "BaseBdev2", 00:25:12.739 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "is_configured": true, 00:25:12.739 "data_offset": 2048, 00:25:12.739 "data_size": 63488 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "name": "BaseBdev3", 00:25:12.739 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "is_configured": true, 00:25:12.739 "data_offset": 2048, 00:25:12.739 "data_size": 63488 00:25:12.739 }, 00:25:12.739 { 00:25:12.739 "name": "BaseBdev4", 
00:25:12.739 "uuid": "3b4520e1-1357-11ef-8e8f-9dd684e56d79", 00:25:12.739 "is_configured": true, 00:25:12.739 "data_offset": 2048, 00:25:12.739 "data_size": 63488 00:25:12.739 } 00:25:12.739 ] 00:25:12.739 } 00:25:12.739 } 00:25:12.739 }' 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:25:12.739 BaseBdev2 00:25:12.739 BaseBdev3 00:25:12.739 BaseBdev4' 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:12.739 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:13.027 "name": "BaseBdev1", 00:25:13.027 "aliases": [ 00:25:13.027 "382ff20f-1357-11ef-8e8f-9dd684e56d79" 00:25:13.027 ], 00:25:13.027 "product_name": "Malloc disk", 00:25:13.027 "block_size": 512, 00:25:13.027 "num_blocks": 65536, 00:25:13.027 "uuid": "382ff20f-1357-11ef-8e8f-9dd684e56d79", 00:25:13.027 "assigned_rate_limits": { 00:25:13.027 "rw_ios_per_sec": 0, 00:25:13.027 "rw_mbytes_per_sec": 0, 00:25:13.027 "r_mbytes_per_sec": 0, 00:25:13.027 "w_mbytes_per_sec": 0 00:25:13.027 }, 00:25:13.027 "claimed": true, 00:25:13.027 "claim_type": "exclusive_write", 00:25:13.027 "zoned": false, 00:25:13.027 "supported_io_types": { 00:25:13.027 "read": true, 00:25:13.027 "write": true, 00:25:13.027 "unmap": true, 00:25:13.027 "write_zeroes": true, 00:25:13.027 "flush": true, 00:25:13.027 "reset": true, 00:25:13.027 "compare": false, 00:25:13.027 "compare_and_write": false, 00:25:13.027 "abort": true, 00:25:13.027 "nvme_admin": false, 00:25:13.027 "nvme_io": false 00:25:13.027 }, 00:25:13.027 "memory_domains": [ 00:25:13.027 { 00:25:13.027 "dma_device_id": "system", 00:25:13.027 "dma_device_type": 1 00:25:13.027 }, 00:25:13.027 { 00:25:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.027 "dma_device_type": 2 00:25:13.027 } 00:25:13.027 ], 00:25:13.027 "driver_specific": {} 00:25:13.027 }' 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:13.027 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:13.028 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:13.028 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:13.287 "name": "BaseBdev2", 00:25:13.287 "aliases": [ 00:25:13.287 "39b07723-1357-11ef-8e8f-9dd684e56d79" 00:25:13.287 ], 00:25:13.287 "product_name": "Malloc disk", 00:25:13.287 "block_size": 512, 00:25:13.287 "num_blocks": 65536, 00:25:13.287 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:13.287 "assigned_rate_limits": { 00:25:13.287 "rw_ios_per_sec": 0, 00:25:13.287 "rw_mbytes_per_sec": 0, 00:25:13.287 "r_mbytes_per_sec": 0, 00:25:13.287 "w_mbytes_per_sec": 0 00:25:13.287 }, 00:25:13.287 "claimed": true, 00:25:13.287 "claim_type": "exclusive_write", 00:25:13.287 "zoned": false, 00:25:13.287 "supported_io_types": { 00:25:13.287 "read": true, 00:25:13.287 "write": true, 00:25:13.287 "unmap": true, 00:25:13.287 "write_zeroes": true, 00:25:13.287 "flush": true, 00:25:13.287 "reset": true, 00:25:13.287 "compare": false, 00:25:13.287 "compare_and_write": false, 00:25:13.287 "abort": true, 00:25:13.287 "nvme_admin": false, 00:25:13.287 "nvme_io": false 00:25:13.287 }, 00:25:13.287 "memory_domains": [ 00:25:13.287 { 00:25:13.287 "dma_device_id": "system", 00:25:13.287 "dma_device_type": 1 00:25:13.287 }, 00:25:13.287 { 00:25:13.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.287 "dma_device_type": 2 00:25:13.287 } 00:25:13.287 ], 00:25:13.287 "driver_specific": {} 00:25:13.287 }' 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:25:13.287 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:13.545 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:13.545 "name": "BaseBdev3", 00:25:13.545 "aliases": [ 00:25:13.545 "3a6f816e-1357-11ef-8e8f-9dd684e56d79" 00:25:13.545 ], 00:25:13.545 "product_name": "Malloc disk", 00:25:13.545 "block_size": 512, 00:25:13.545 "num_blocks": 65536, 00:25:13.545 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:13.545 "assigned_rate_limits": { 00:25:13.545 "rw_ios_per_sec": 0, 00:25:13.545 "rw_mbytes_per_sec": 0, 00:25:13.545 "r_mbytes_per_sec": 0, 00:25:13.545 "w_mbytes_per_sec": 0 00:25:13.545 }, 00:25:13.545 "claimed": true, 00:25:13.545 "claim_type": "exclusive_write", 00:25:13.545 "zoned": false, 00:25:13.545 "supported_io_types": { 00:25:13.545 "read": true, 00:25:13.545 "write": true, 00:25:13.545 "unmap": true, 00:25:13.545 "write_zeroes": true, 00:25:13.545 "flush": true, 00:25:13.545 "reset": true, 00:25:13.545 "compare": false, 00:25:13.545 "compare_and_write": false, 00:25:13.545 "abort": true, 00:25:13.545 "nvme_admin": false, 00:25:13.545 "nvme_io": false 00:25:13.545 }, 00:25:13.545 "memory_domains": [ 00:25:13.545 { 00:25:13.545 "dma_device_id": "system", 00:25:13.545 "dma_device_type": 1 00:25:13.545 }, 00:25:13.545 { 00:25:13.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.545 "dma_device_type": 2 00:25:13.545 } 00:25:13.545 ], 00:25:13.545 "driver_specific": {} 00:25:13.545 }' 00:25:13.545 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.545 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.545 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:13.545 07:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:13.545 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:13.854 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:13.854 "name": "BaseBdev4", 00:25:13.854 "aliases": [ 00:25:13.854 "3b4520e1-1357-11ef-8e8f-9dd684e56d79" 00:25:13.854 ], 00:25:13.854 "product_name": "Malloc disk", 00:25:13.854 "block_size": 512, 
00:25:13.854 "num_blocks": 65536, 00:25:13.854 "uuid": "3b4520e1-1357-11ef-8e8f-9dd684e56d79", 00:25:13.854 "assigned_rate_limits": { 00:25:13.854 "rw_ios_per_sec": 0, 00:25:13.854 "rw_mbytes_per_sec": 0, 00:25:13.854 "r_mbytes_per_sec": 0, 00:25:13.854 "w_mbytes_per_sec": 0 00:25:13.854 }, 00:25:13.854 "claimed": true, 00:25:13.854 "claim_type": "exclusive_write", 00:25:13.854 "zoned": false, 00:25:13.854 "supported_io_types": { 00:25:13.855 "read": true, 00:25:13.855 "write": true, 00:25:13.855 "unmap": true, 00:25:13.855 "write_zeroes": true, 00:25:13.855 "flush": true, 00:25:13.855 "reset": true, 00:25:13.855 "compare": false, 00:25:13.855 "compare_and_write": false, 00:25:13.855 "abort": true, 00:25:13.855 "nvme_admin": false, 00:25:13.855 "nvme_io": false 00:25:13.855 }, 00:25:13.855 "memory_domains": [ 00:25:13.855 { 00:25:13.855 "dma_device_id": "system", 00:25:13.855 "dma_device_type": 1 00:25:13.855 }, 00:25:13.855 { 00:25:13.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.855 "dma_device_type": 2 00:25:13.855 } 00:25:13.855 ], 00:25:13.855 "driver_specific": {} 00:25:13.855 }' 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:13.855 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.114 [2024-05-16 07:38:07.588371] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.114 [2024-05-16 07:38:07.588392] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.114 [2024-05-16 07:38:07.588402] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy concat 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # return 1 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # expected_state=offline 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- 
# verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:14.114 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:14.115 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:14.115 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:14.115 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.115 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.373 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:14.373 "name": "Existed_Raid", 00:25:14.373 "uuid": "392a3095-1357-11ef-8e8f-9dd684e56d79", 00:25:14.373 "strip_size_kb": 64, 00:25:14.373 "state": "offline", 00:25:14.373 "raid_level": "concat", 00:25:14.373 "superblock": true, 00:25:14.373 "num_base_bdevs": 4, 00:25:14.373 "num_base_bdevs_discovered": 3, 00:25:14.373 "num_base_bdevs_operational": 3, 00:25:14.373 "base_bdevs_list": [ 00:25:14.373 { 00:25:14.373 "name": null, 00:25:14.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.373 "is_configured": false, 00:25:14.373 "data_offset": 2048, 00:25:14.373 "data_size": 63488 00:25:14.373 }, 00:25:14.373 { 00:25:14.373 "name": "BaseBdev2", 00:25:14.373 "uuid": "39b07723-1357-11ef-8e8f-9dd684e56d79", 00:25:14.373 "is_configured": true, 00:25:14.373 "data_offset": 2048, 00:25:14.373 "data_size": 63488 00:25:14.373 }, 00:25:14.373 { 00:25:14.373 "name": "BaseBdev3", 00:25:14.373 "uuid": "3a6f816e-1357-11ef-8e8f-9dd684e56d79", 00:25:14.373 "is_configured": true, 00:25:14.373 "data_offset": 2048, 00:25:14.373 "data_size": 63488 00:25:14.373 }, 00:25:14.373 { 00:25:14.373 "name": "BaseBdev4", 00:25:14.373 "uuid": "3b4520e1-1357-11ef-8e8f-9dd684e56d79", 00:25:14.374 "is_configured": true, 00:25:14.374 "data_offset": 2048, 00:25:14.374 "data_size": 63488 00:25:14.374 } 00:25:14.374 ] 00:25:14.374 }' 00:25:14.374 07:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:14.374 07:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.633 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:14.634 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:14.634 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.634 07:38:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:14.892 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:14.892 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:14.892 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:15.150 [2024-05-16 07:38:08.593079] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.150 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:15.150 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:15.150 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.150 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:15.409 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:15.409 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.409 07:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:15.667 [2024-05-16 07:38:09.053832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:15.667 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:15.667 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:15.667 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.667 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:15.924 [2024-05-16 07:38:09.450514] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:15.924 [2024-05-16 07:38:09.450539] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af94a00 name Existed_Raid, state offline 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.924 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:16.182 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:16.440 BaseBdev2 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:16.440 07:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.697 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:16.954 [ 00:25:16.954 { 00:25:16.954 "name": "BaseBdev2", 00:25:16.954 "aliases": [ 00:25:16.954 "3e43936f-1357-11ef-8e8f-9dd684e56d79" 00:25:16.954 ], 00:25:16.954 "product_name": "Malloc disk", 00:25:16.954 "block_size": 512, 00:25:16.954 "num_blocks": 65536, 00:25:16.954 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:16.954 "assigned_rate_limits": { 00:25:16.955 "rw_ios_per_sec": 0, 00:25:16.955 "rw_mbytes_per_sec": 0, 00:25:16.955 "r_mbytes_per_sec": 0, 00:25:16.955 "w_mbytes_per_sec": 0 00:25:16.955 }, 00:25:16.955 "claimed": false, 00:25:16.955 "zoned": false, 00:25:16.955 "supported_io_types": { 00:25:16.955 "read": true, 00:25:16.955 "write": true, 00:25:16.955 "unmap": true, 00:25:16.955 "write_zeroes": true, 00:25:16.955 "flush": true, 00:25:16.955 "reset": true, 00:25:16.955 "compare": false, 00:25:16.955 "compare_and_write": false, 00:25:16.955 "abort": true, 00:25:16.955 "nvme_admin": false, 00:25:16.955 "nvme_io": false 00:25:16.955 }, 00:25:16.955 "memory_domains": [ 00:25:16.955 { 00:25:16.955 "dma_device_id": "system", 00:25:16.955 "dma_device_type": 1 00:25:16.955 }, 00:25:16.955 { 00:25:16.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.955 "dma_device_type": 2 00:25:16.955 } 00:25:16.955 ], 00:25:16.955 "driver_specific": {} 00:25:16.955 } 00:25:16.955 ] 00:25:16.955 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:16.955 07:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:16.955 07:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:16.955 07:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:17.212 BaseBdev3 
00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.212 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.469 07:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:17.727 [ 00:25:17.727 { 00:25:17.727 "name": "BaseBdev3", 00:25:17.727 "aliases": [ 00:25:17.727 "3eb5b608-1357-11ef-8e8f-9dd684e56d79" 00:25:17.727 ], 00:25:17.727 "product_name": "Malloc disk", 00:25:17.727 "block_size": 512, 00:25:17.727 "num_blocks": 65536, 00:25:17.727 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:17.727 "assigned_rate_limits": { 00:25:17.727 "rw_ios_per_sec": 0, 00:25:17.727 "rw_mbytes_per_sec": 0, 00:25:17.727 "r_mbytes_per_sec": 0, 00:25:17.727 "w_mbytes_per_sec": 0 00:25:17.727 }, 00:25:17.727 "claimed": false, 00:25:17.727 "zoned": false, 00:25:17.727 "supported_io_types": { 00:25:17.727 "read": true, 00:25:17.727 "write": true, 00:25:17.727 "unmap": true, 00:25:17.727 "write_zeroes": true, 00:25:17.727 "flush": true, 00:25:17.727 "reset": true, 00:25:17.727 "compare": false, 00:25:17.727 "compare_and_write": false, 00:25:17.727 "abort": true, 00:25:17.727 "nvme_admin": false, 00:25:17.727 "nvme_io": false 00:25:17.727 }, 00:25:17.727 "memory_domains": [ 00:25:17.727 { 00:25:17.727 "dma_device_id": "system", 00:25:17.727 "dma_device_type": 1 00:25:17.727 }, 00:25:17.727 { 00:25:17.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.727 "dma_device_type": 2 00:25:17.727 } 00:25:17.727 ], 00:25:17.727 "driver_specific": {} 00:25:17.727 } 00:25:17.727 ] 00:25:17.727 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:17.727 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:17.727 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:17.727 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:17.985 BaseBdev4 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.985 07:38:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.985 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:18.243 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:18.501 [ 00:25:18.501 { 00:25:18.501 "name": "BaseBdev4", 00:25:18.501 "aliases": [ 00:25:18.501 "3f1b073c-1357-11ef-8e8f-9dd684e56d79" 00:25:18.501 ], 00:25:18.501 "product_name": "Malloc disk", 00:25:18.501 "block_size": 512, 00:25:18.501 "num_blocks": 65536, 00:25:18.501 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:18.501 "assigned_rate_limits": { 00:25:18.501 "rw_ios_per_sec": 0, 00:25:18.501 "rw_mbytes_per_sec": 0, 00:25:18.501 "r_mbytes_per_sec": 0, 00:25:18.501 "w_mbytes_per_sec": 0 00:25:18.501 }, 00:25:18.501 "claimed": false, 00:25:18.501 "zoned": false, 00:25:18.501 "supported_io_types": { 00:25:18.501 "read": true, 00:25:18.501 "write": true, 00:25:18.501 "unmap": true, 00:25:18.501 "write_zeroes": true, 00:25:18.501 "flush": true, 00:25:18.501 "reset": true, 00:25:18.501 "compare": false, 00:25:18.501 "compare_and_write": false, 00:25:18.501 "abort": true, 00:25:18.501 "nvme_admin": false, 00:25:18.501 "nvme_io": false 00:25:18.501 }, 00:25:18.501 "memory_domains": [ 00:25:18.501 { 00:25:18.501 "dma_device_id": "system", 00:25:18.501 "dma_device_type": 1 00:25:18.501 }, 00:25:18.501 { 00:25:18.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.501 "dma_device_type": 2 00:25:18.501 } 00:25:18.501 ], 00:25:18.501 "driver_specific": {} 00:25:18.501 } 00:25:18.501 ] 00:25:18.501 07:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:18.501 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:18.501 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:18.501 07:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:18.501 [2024-05-16 07:38:12.011313] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:18.501 [2024-05-16 07:38:12.011370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:18.501 [2024-05-16 07:38:12.011377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.501 [2024-05-16 07:38:12.011772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:18.501 [2024-05-16 07:38:12.011782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.501 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.759 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.759 "name": "Existed_Raid", 00:25:18.759 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:18.759 "strip_size_kb": 64, 00:25:18.759 "state": "configuring", 00:25:18.759 "raid_level": "concat", 00:25:18.759 "superblock": true, 00:25:18.759 "num_base_bdevs": 4, 00:25:18.759 "num_base_bdevs_discovered": 3, 00:25:18.759 "num_base_bdevs_operational": 4, 00:25:18.759 "base_bdevs_list": [ 00:25:18.759 { 00:25:18.759 "name": "BaseBdev1", 00:25:18.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.759 "is_configured": false, 00:25:18.759 "data_offset": 0, 00:25:18.759 "data_size": 0 00:25:18.759 }, 00:25:18.759 { 00:25:18.759 "name": "BaseBdev2", 00:25:18.759 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:18.759 "is_configured": true, 00:25:18.759 "data_offset": 2048, 00:25:18.759 "data_size": 63488 00:25:18.759 }, 00:25:18.759 { 00:25:18.759 "name": "BaseBdev3", 00:25:18.759 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:18.759 "is_configured": true, 00:25:18.759 "data_offset": 2048, 00:25:18.759 "data_size": 63488 00:25:18.759 }, 00:25:18.759 { 00:25:18.759 "name": "BaseBdev4", 00:25:18.759 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:18.759 "is_configured": true, 00:25:18.759 "data_offset": 2048, 00:25:18.759 "data_size": 63488 00:25:18.759 } 00:25:18.759 ] 00:25:18.759 }' 00:25:18.759 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.759 07:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.017 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:19.275 [2024-05-16 07:38:12.747297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.275 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.533 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.533 "name": "Existed_Raid", 00:25:19.533 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:19.533 "strip_size_kb": 64, 00:25:19.533 "state": "configuring", 00:25:19.533 "raid_level": "concat", 00:25:19.533 "superblock": true, 00:25:19.533 "num_base_bdevs": 4, 00:25:19.534 "num_base_bdevs_discovered": 2, 00:25:19.534 "num_base_bdevs_operational": 4, 00:25:19.534 "base_bdevs_list": [ 00:25:19.534 { 00:25:19.534 "name": "BaseBdev1", 00:25:19.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.534 "is_configured": false, 00:25:19.534 "data_offset": 0, 00:25:19.534 "data_size": 0 00:25:19.534 }, 00:25:19.534 { 00:25:19.534 "name": null, 00:25:19.534 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:19.534 "is_configured": false, 00:25:19.534 "data_offset": 2048, 00:25:19.534 "data_size": 63488 00:25:19.534 }, 00:25:19.534 { 00:25:19.534 "name": "BaseBdev3", 00:25:19.534 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:19.534 "is_configured": true, 00:25:19.534 "data_offset": 2048, 00:25:19.534 "data_size": 63488 00:25:19.534 }, 00:25:19.534 { 00:25:19.534 "name": "BaseBdev4", 00:25:19.534 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:19.534 "is_configured": true, 00:25:19.534 "data_offset": 2048, 00:25:19.534 "data_size": 63488 00:25:19.534 } 00:25:19.534 ] 00:25:19.534 }' 00:25:19.534 07:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.534 07:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.792 07:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.792 07:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:20.051 07:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:25:20.051 07:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:20.313 [2024-05-16 07:38:13.743396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.313 BaseBdev1 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:20.313 07:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.577 07:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:20.835 [ 00:25:20.835 { 00:25:20.835 "name": "BaseBdev1", 00:25:20.835 "aliases": [ 00:25:20.835 "4086cc2a-1357-11ef-8e8f-9dd684e56d79" 00:25:20.835 ], 00:25:20.835 "product_name": "Malloc disk", 00:25:20.835 "block_size": 512, 00:25:20.835 "num_blocks": 65536, 00:25:20.835 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:20.835 "assigned_rate_limits": { 00:25:20.835 "rw_ios_per_sec": 0, 00:25:20.835 "rw_mbytes_per_sec": 0, 00:25:20.835 "r_mbytes_per_sec": 0, 00:25:20.835 "w_mbytes_per_sec": 0 00:25:20.835 }, 00:25:20.835 "claimed": true, 00:25:20.835 "claim_type": "exclusive_write", 00:25:20.835 "zoned": false, 00:25:20.835 "supported_io_types": { 00:25:20.835 "read": true, 00:25:20.835 "write": true, 00:25:20.835 "unmap": true, 00:25:20.835 "write_zeroes": true, 00:25:20.835 "flush": true, 00:25:20.835 "reset": true, 00:25:20.835 "compare": false, 00:25:20.835 "compare_and_write": false, 00:25:20.835 "abort": true, 00:25:20.835 "nvme_admin": false, 00:25:20.835 "nvme_io": false 00:25:20.835 }, 00:25:20.835 "memory_domains": [ 00:25:20.835 { 00:25:20.835 "dma_device_id": "system", 00:25:20.835 "dma_device_type": 1 00:25:20.835 }, 00:25:20.835 { 00:25:20.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.835 "dma_device_type": 2 00:25:20.835 } 00:25:20.835 ], 00:25:20.835 "driver_specific": {} 00:25:20.835 } 00:25:20.835 ] 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.835 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.094 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.094 "name": "Existed_Raid", 00:25:21.094 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:21.094 "strip_size_kb": 64, 00:25:21.094 "state": "configuring", 00:25:21.094 "raid_level": "concat", 00:25:21.094 "superblock": true, 00:25:21.094 "num_base_bdevs": 4, 00:25:21.094 "num_base_bdevs_discovered": 3, 00:25:21.094 "num_base_bdevs_operational": 4, 00:25:21.094 "base_bdevs_list": [ 00:25:21.094 { 00:25:21.094 "name": "BaseBdev1", 00:25:21.094 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:21.094 "is_configured": true, 00:25:21.094 "data_offset": 2048, 00:25:21.094 "data_size": 63488 00:25:21.094 }, 00:25:21.094 { 00:25:21.094 "name": null, 00:25:21.094 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:21.094 "is_configured": false, 00:25:21.094 "data_offset": 2048, 00:25:21.094 "data_size": 63488 00:25:21.094 }, 00:25:21.094 { 00:25:21.094 "name": "BaseBdev3", 00:25:21.094 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:21.094 "is_configured": true, 00:25:21.094 "data_offset": 2048, 00:25:21.094 "data_size": 63488 00:25:21.094 }, 00:25:21.094 { 00:25:21.094 "name": "BaseBdev4", 00:25:21.094 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:21.094 "is_configured": true, 00:25:21.094 "data_offset": 2048, 00:25:21.094 "data_size": 63488 00:25:21.094 } 00:25:21.094 ] 00:25:21.094 }' 00:25:21.094 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.094 07:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:21.352 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.610 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:21.610 07:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:21.869 [2024-05-16 07:38:15.191295] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:21.869 07:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.869 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.127 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:22.127 "name": "Existed_Raid", 00:25:22.127 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:22.127 "strip_size_kb": 64, 00:25:22.127 "state": "configuring", 00:25:22.127 "raid_level": "concat", 00:25:22.127 "superblock": true, 00:25:22.127 "num_base_bdevs": 4, 00:25:22.127 "num_base_bdevs_discovered": 2, 00:25:22.127 "num_base_bdevs_operational": 4, 00:25:22.127 "base_bdevs_list": [ 00:25:22.127 { 00:25:22.127 "name": "BaseBdev1", 00:25:22.127 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:22.127 "is_configured": true, 00:25:22.127 "data_offset": 2048, 00:25:22.127 "data_size": 63488 00:25:22.127 }, 00:25:22.127 { 00:25:22.127 "name": null, 00:25:22.127 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:22.127 "is_configured": false, 00:25:22.127 "data_offset": 2048, 00:25:22.127 "data_size": 63488 00:25:22.127 }, 00:25:22.127 { 00:25:22.127 "name": null, 00:25:22.127 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:22.127 "is_configured": false, 00:25:22.127 "data_offset": 2048, 00:25:22.127 "data_size": 63488 00:25:22.127 }, 00:25:22.127 { 00:25:22.127 "name": "BaseBdev4", 00:25:22.127 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:22.127 "is_configured": true, 00:25:22.127 "data_offset": 2048, 00:25:22.127 "data_size": 63488 00:25:22.127 } 00:25:22.127 ] 00:25:22.127 }' 00:25:22.127 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:22.127 07:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.386 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.386 07:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:22.645 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:25:22.645 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:22.904 [2024-05-16 07:38:16.263290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.904 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.162 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:23.162 "name": "Existed_Raid", 00:25:23.162 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:23.162 "strip_size_kb": 64, 00:25:23.162 "state": "configuring", 00:25:23.162 "raid_level": "concat", 00:25:23.162 "superblock": true, 00:25:23.162 "num_base_bdevs": 4, 00:25:23.162 "num_base_bdevs_discovered": 3, 00:25:23.162 "num_base_bdevs_operational": 4, 00:25:23.162 "base_bdevs_list": [ 00:25:23.162 { 00:25:23.162 "name": "BaseBdev1", 00:25:23.162 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:23.162 "is_configured": true, 00:25:23.162 "data_offset": 2048, 00:25:23.162 "data_size": 63488 00:25:23.162 }, 00:25:23.162 { 00:25:23.162 "name": null, 00:25:23.162 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:23.162 "is_configured": false, 00:25:23.162 "data_offset": 2048, 00:25:23.162 "data_size": 63488 00:25:23.162 }, 00:25:23.163 { 00:25:23.163 "name": "BaseBdev3", 00:25:23.163 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:23.163 "is_configured": true, 00:25:23.163 "data_offset": 2048, 00:25:23.163 "data_size": 63488 00:25:23.163 }, 00:25:23.163 { 00:25:23.163 "name": "BaseBdev4", 00:25:23.163 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:23.163 "is_configured": true, 00:25:23.163 "data_offset": 2048, 00:25:23.163 "data_size": 63488 00:25:23.163 } 00:25:23.163 ] 00:25:23.163 }' 00:25:23.163 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:23.163 07:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.420 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.420 07:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:23.678 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:25:23.678 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:23.934 [2024-05-16 07:38:17.403287] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:23.934 07:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.934 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.191 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.191 "name": "Existed_Raid", 00:25:24.191 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:24.191 "strip_size_kb": 64, 00:25:24.191 "state": "configuring", 00:25:24.191 "raid_level": "concat", 00:25:24.191 "superblock": true, 00:25:24.191 "num_base_bdevs": 4, 00:25:24.191 "num_base_bdevs_discovered": 2, 00:25:24.191 "num_base_bdevs_operational": 4, 00:25:24.191 "base_bdevs_list": [ 00:25:24.191 { 00:25:24.191 "name": null, 00:25:24.191 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:24.191 "is_configured": false, 00:25:24.191 "data_offset": 2048, 00:25:24.191 "data_size": 63488 00:25:24.191 }, 00:25:24.191 { 00:25:24.191 "name": null, 00:25:24.191 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:24.191 "is_configured": false, 00:25:24.191 "data_offset": 2048, 00:25:24.191 "data_size": 63488 00:25:24.191 }, 00:25:24.191 { 00:25:24.191 "name": "BaseBdev3", 00:25:24.191 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:24.191 "is_configured": true, 00:25:24.191 "data_offset": 2048, 00:25:24.191 "data_size": 63488 00:25:24.191 }, 00:25:24.191 { 00:25:24.191 "name": "BaseBdev4", 00:25:24.191 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:24.191 "is_configured": true, 00:25:24.191 "data_offset": 2048, 00:25:24.191 "data_size": 63488 00:25:24.191 } 00:25:24.191 ] 00:25:24.191 }' 00:25:24.191 07:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.191 07:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.756 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.756 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:24.756 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ 
false == \f\a\l\s\e ]] 00:25:24.756 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:25.013 [2024-05-16 07:38:18.464091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.013 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.272 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:25.272 "name": "Existed_Raid", 00:25:25.272 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:25.272 "strip_size_kb": 64, 00:25:25.272 "state": "configuring", 00:25:25.272 "raid_level": "concat", 00:25:25.272 "superblock": true, 00:25:25.272 "num_base_bdevs": 4, 00:25:25.272 "num_base_bdevs_discovered": 3, 00:25:25.272 "num_base_bdevs_operational": 4, 00:25:25.272 "base_bdevs_list": [ 00:25:25.272 { 00:25:25.272 "name": null, 00:25:25.272 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:25.272 "is_configured": false, 00:25:25.272 "data_offset": 2048, 00:25:25.272 "data_size": 63488 00:25:25.272 }, 00:25:25.272 { 00:25:25.272 "name": "BaseBdev2", 00:25:25.272 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:25.272 "is_configured": true, 00:25:25.272 "data_offset": 2048, 00:25:25.272 "data_size": 63488 00:25:25.272 }, 00:25:25.272 { 00:25:25.272 "name": "BaseBdev3", 00:25:25.272 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:25.272 "is_configured": true, 00:25:25.272 "data_offset": 2048, 00:25:25.272 "data_size": 63488 00:25:25.272 }, 00:25:25.272 { 00:25:25.272 "name": "BaseBdev4", 00:25:25.272 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:25.272 "is_configured": true, 00:25:25.272 "data_offset": 2048, 00:25:25.272 "data_size": 63488 00:25:25.272 } 00:25:25.272 ] 00:25:25.272 }' 00:25:25.272 07:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:25.272 07:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.530 07:38:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.530 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:25.788 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:25:25.788 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.788 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:26.045 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4086cc2a-1357-11ef-8e8f-9dd684e56d79 00:25:26.302 [2024-05-16 07:38:19.776189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:26.302 [2024-05-16 07:38:19.776226] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82af94f00 00:25:26.302 [2024-05-16 07:38:19.776231] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:26.302 [2024-05-16 07:38:19.776248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aff7e20 00:25:26.302 [2024-05-16 07:38:19.776281] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82af94f00 00:25:26.302 [2024-05-16 07:38:19.776284] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82af94f00 00:25:26.302 [2024-05-16 07:38:19.776299] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.302 NewBaseBdev 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:26.302 07:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:26.558 07:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:26.815 [ 00:25:26.815 { 00:25:26.815 "name": "NewBaseBdev", 00:25:26.815 "aliases": [ 00:25:26.815 "4086cc2a-1357-11ef-8e8f-9dd684e56d79" 00:25:26.815 ], 00:25:26.815 "product_name": "Malloc disk", 00:25:26.815 "block_size": 512, 00:25:26.815 "num_blocks": 65536, 00:25:26.815 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:26.815 "assigned_rate_limits": { 00:25:26.815 "rw_ios_per_sec": 0, 00:25:26.815 "rw_mbytes_per_sec": 0, 00:25:26.815 "r_mbytes_per_sec": 0, 00:25:26.815 "w_mbytes_per_sec": 0 00:25:26.815 }, 00:25:26.815 "claimed": true, 
00:25:26.815 "claim_type": "exclusive_write", 00:25:26.815 "zoned": false, 00:25:26.815 "supported_io_types": { 00:25:26.815 "read": true, 00:25:26.815 "write": true, 00:25:26.815 "unmap": true, 00:25:26.815 "write_zeroes": true, 00:25:26.815 "flush": true, 00:25:26.815 "reset": true, 00:25:26.815 "compare": false, 00:25:26.815 "compare_and_write": false, 00:25:26.815 "abort": true, 00:25:26.815 "nvme_admin": false, 00:25:26.815 "nvme_io": false 00:25:26.815 }, 00:25:26.815 "memory_domains": [ 00:25:26.815 { 00:25:26.815 "dma_device_id": "system", 00:25:26.815 "dma_device_type": 1 00:25:26.815 }, 00:25:26.815 { 00:25:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.815 "dma_device_type": 2 00:25:26.815 } 00:25:26.815 ], 00:25:26.815 "driver_specific": {} 00:25:26.815 } 00:25:26.815 ] 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.815 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.072 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.072 "name": "Existed_Raid", 00:25:27.072 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:27.072 "strip_size_kb": 64, 00:25:27.072 "state": "online", 00:25:27.072 "raid_level": "concat", 00:25:27.072 "superblock": true, 00:25:27.072 "num_base_bdevs": 4, 00:25:27.072 "num_base_bdevs_discovered": 4, 00:25:27.072 "num_base_bdevs_operational": 4, 00:25:27.072 "base_bdevs_list": [ 00:25:27.072 { 00:25:27.072 "name": "NewBaseBdev", 00:25:27.072 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:27.072 "is_configured": true, 00:25:27.072 "data_offset": 2048, 00:25:27.072 "data_size": 63488 00:25:27.072 }, 00:25:27.072 { 00:25:27.072 "name": "BaseBdev2", 00:25:27.072 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:27.072 "is_configured": true, 00:25:27.072 "data_offset": 2048, 00:25:27.072 "data_size": 63488 00:25:27.072 }, 00:25:27.072 { 00:25:27.072 "name": "BaseBdev3", 00:25:27.072 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:27.072 "is_configured": true, 00:25:27.072 "data_offset": 2048, 
00:25:27.072 "data_size": 63488 00:25:27.072 }, 00:25:27.072 { 00:25:27.072 "name": "BaseBdev4", 00:25:27.072 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:27.072 "is_configured": true, 00:25:27.072 "data_offset": 2048, 00:25:27.072 "data_size": 63488 00:25:27.072 } 00:25:27.072 ] 00:25:27.072 }' 00:25:27.072 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.072 07:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:27.330 07:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:27.587 [2024-05-16 07:38:21.140184] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:27.845 "name": "Existed_Raid", 00:25:27.845 "aliases": [ 00:25:27.845 "3f7e83bf-1357-11ef-8e8f-9dd684e56d79" 00:25:27.845 ], 00:25:27.845 "product_name": "Raid Volume", 00:25:27.845 "block_size": 512, 00:25:27.845 "num_blocks": 253952, 00:25:27.845 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "assigned_rate_limits": { 00:25:27.845 "rw_ios_per_sec": 0, 00:25:27.845 "rw_mbytes_per_sec": 0, 00:25:27.845 "r_mbytes_per_sec": 0, 00:25:27.845 "w_mbytes_per_sec": 0 00:25:27.845 }, 00:25:27.845 "claimed": false, 00:25:27.845 "zoned": false, 00:25:27.845 "supported_io_types": { 00:25:27.845 "read": true, 00:25:27.845 "write": true, 00:25:27.845 "unmap": true, 00:25:27.845 "write_zeroes": true, 00:25:27.845 "flush": true, 00:25:27.845 "reset": true, 00:25:27.845 "compare": false, 00:25:27.845 "compare_and_write": false, 00:25:27.845 "abort": false, 00:25:27.845 "nvme_admin": false, 00:25:27.845 "nvme_io": false 00:25:27.845 }, 00:25:27.845 "memory_domains": [ 00:25:27.845 { 00:25:27.845 "dma_device_id": "system", 00:25:27.845 "dma_device_type": 1 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.845 "dma_device_type": 2 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "system", 00:25:27.845 "dma_device_type": 1 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.845 "dma_device_type": 2 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "system", 00:25:27.845 "dma_device_type": 1 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.845 "dma_device_type": 2 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": "system", 00:25:27.845 "dma_device_type": 1 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:27.845 "dma_device_type": 2 00:25:27.845 } 00:25:27.845 ], 00:25:27.845 "driver_specific": { 00:25:27.845 "raid": { 00:25:27.845 "uuid": "3f7e83bf-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "strip_size_kb": 64, 00:25:27.845 "state": "online", 00:25:27.845 "raid_level": "concat", 00:25:27.845 "superblock": true, 00:25:27.845 "num_base_bdevs": 4, 00:25:27.845 "num_base_bdevs_discovered": 4, 00:25:27.845 "num_base_bdevs_operational": 4, 00:25:27.845 "base_bdevs_list": [ 00:25:27.845 { 00:25:27.845 "name": "NewBaseBdev", 00:25:27.845 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "is_configured": true, 00:25:27.845 "data_offset": 2048, 00:25:27.845 "data_size": 63488 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "name": "BaseBdev2", 00:25:27.845 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "is_configured": true, 00:25:27.845 "data_offset": 2048, 00:25:27.845 "data_size": 63488 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "name": "BaseBdev3", 00:25:27.845 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "is_configured": true, 00:25:27.845 "data_offset": 2048, 00:25:27.845 "data_size": 63488 00:25:27.845 }, 00:25:27.845 { 00:25:27.845 "name": "BaseBdev4", 00:25:27.845 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:27.845 "is_configured": true, 00:25:27.845 "data_offset": 2048, 00:25:27.845 "data_size": 63488 00:25:27.845 } 00:25:27.845 ] 00:25:27.845 } 00:25:27.845 } 00:25:27.845 }' 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:25:27.845 BaseBdev2 00:25:27.845 BaseBdev3 00:25:27.845 BaseBdev4' 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:27.845 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:28.103 "name": "NewBaseBdev", 00:25:28.103 "aliases": [ 00:25:28.103 "4086cc2a-1357-11ef-8e8f-9dd684e56d79" 00:25:28.103 ], 00:25:28.103 "product_name": "Malloc disk", 00:25:28.103 "block_size": 512, 00:25:28.103 "num_blocks": 65536, 00:25:28.103 "uuid": "4086cc2a-1357-11ef-8e8f-9dd684e56d79", 00:25:28.103 "assigned_rate_limits": { 00:25:28.103 "rw_ios_per_sec": 0, 00:25:28.103 "rw_mbytes_per_sec": 0, 00:25:28.103 "r_mbytes_per_sec": 0, 00:25:28.103 "w_mbytes_per_sec": 0 00:25:28.103 }, 00:25:28.103 "claimed": true, 00:25:28.103 "claim_type": "exclusive_write", 00:25:28.103 "zoned": false, 00:25:28.103 "supported_io_types": { 00:25:28.103 "read": true, 00:25:28.103 "write": true, 00:25:28.103 "unmap": true, 00:25:28.103 "write_zeroes": true, 00:25:28.103 "flush": true, 00:25:28.103 "reset": true, 00:25:28.103 "compare": false, 00:25:28.103 "compare_and_write": false, 00:25:28.103 "abort": true, 00:25:28.103 "nvme_admin": false, 00:25:28.103 "nvme_io": false 00:25:28.103 }, 00:25:28.103 "memory_domains": [ 00:25:28.103 { 00:25:28.103 "dma_device_id": "system", 00:25:28.103 "dma_device_type": 1 00:25:28.103 }, 00:25:28.103 { 00:25:28.103 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:28.103 "dma_device_type": 2 00:25:28.103 } 00:25:28.103 ], 00:25:28.103 "driver_specific": {} 00:25:28.103 }' 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:28.103 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:28.104 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:28.104 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:28.362 "name": "BaseBdev2", 00:25:28.362 "aliases": [ 00:25:28.362 "3e43936f-1357-11ef-8e8f-9dd684e56d79" 00:25:28.362 ], 00:25:28.362 "product_name": "Malloc disk", 00:25:28.362 "block_size": 512, 00:25:28.362 "num_blocks": 65536, 00:25:28.362 "uuid": "3e43936f-1357-11ef-8e8f-9dd684e56d79", 00:25:28.362 "assigned_rate_limits": { 00:25:28.362 "rw_ios_per_sec": 0, 00:25:28.362 "rw_mbytes_per_sec": 0, 00:25:28.362 "r_mbytes_per_sec": 0, 00:25:28.362 "w_mbytes_per_sec": 0 00:25:28.362 }, 00:25:28.362 "claimed": true, 00:25:28.362 "claim_type": "exclusive_write", 00:25:28.362 "zoned": false, 00:25:28.362 "supported_io_types": { 00:25:28.362 "read": true, 00:25:28.362 "write": true, 00:25:28.362 "unmap": true, 00:25:28.362 "write_zeroes": true, 00:25:28.362 "flush": true, 00:25:28.362 "reset": true, 00:25:28.362 "compare": false, 00:25:28.362 "compare_and_write": false, 00:25:28.362 "abort": true, 00:25:28.362 "nvme_admin": false, 00:25:28.362 "nvme_io": false 00:25:28.362 }, 00:25:28.362 "memory_domains": [ 00:25:28.362 { 00:25:28.362 "dma_device_id": "system", 00:25:28.362 "dma_device_type": 1 00:25:28.362 }, 00:25:28.362 { 00:25:28.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.362 "dma_device_type": 2 00:25:28.362 } 00:25:28.362 ], 00:25:28.362 "driver_specific": {} 00:25:28.362 }' 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 
== 512 ]] 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:28.362 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:28.621 07:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:28.621 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:28.621 "name": "BaseBdev3", 00:25:28.621 "aliases": [ 00:25:28.621 "3eb5b608-1357-11ef-8e8f-9dd684e56d79" 00:25:28.621 ], 00:25:28.621 "product_name": "Malloc disk", 00:25:28.621 "block_size": 512, 00:25:28.621 "num_blocks": 65536, 00:25:28.621 "uuid": "3eb5b608-1357-11ef-8e8f-9dd684e56d79", 00:25:28.621 "assigned_rate_limits": { 00:25:28.621 "rw_ios_per_sec": 0, 00:25:28.621 "rw_mbytes_per_sec": 0, 00:25:28.621 "r_mbytes_per_sec": 0, 00:25:28.621 "w_mbytes_per_sec": 0 00:25:28.621 }, 00:25:28.621 "claimed": true, 00:25:28.621 "claim_type": "exclusive_write", 00:25:28.621 "zoned": false, 00:25:28.621 "supported_io_types": { 00:25:28.621 "read": true, 00:25:28.621 "write": true, 00:25:28.621 "unmap": true, 00:25:28.621 "write_zeroes": true, 00:25:28.621 "flush": true, 00:25:28.621 "reset": true, 00:25:28.621 "compare": false, 00:25:28.621 "compare_and_write": false, 00:25:28.621 "abort": true, 00:25:28.621 "nvme_admin": false, 00:25:28.621 "nvme_io": false 00:25:28.621 }, 00:25:28.621 "memory_domains": [ 00:25:28.621 { 00:25:28.621 "dma_device_id": "system", 00:25:28.621 "dma_device_type": 1 00:25:28.621 }, 00:25:28.621 { 00:25:28.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.621 "dma_device_type": 2 00:25:28.621 } 00:25:28.621 ], 00:25:28.621 "driver_specific": {} 00:25:28.621 }' 00:25:28.621 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.879 
07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:28.879 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:29.138 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:29.138 "name": "BaseBdev4", 00:25:29.138 "aliases": [ 00:25:29.138 "3f1b073c-1357-11ef-8e8f-9dd684e56d79" 00:25:29.138 ], 00:25:29.138 "product_name": "Malloc disk", 00:25:29.138 "block_size": 512, 00:25:29.138 "num_blocks": 65536, 00:25:29.138 "uuid": "3f1b073c-1357-11ef-8e8f-9dd684e56d79", 00:25:29.138 "assigned_rate_limits": { 00:25:29.138 "rw_ios_per_sec": 0, 00:25:29.138 "rw_mbytes_per_sec": 0, 00:25:29.138 "r_mbytes_per_sec": 0, 00:25:29.138 "w_mbytes_per_sec": 0 00:25:29.138 }, 00:25:29.138 "claimed": true, 00:25:29.138 "claim_type": "exclusive_write", 00:25:29.138 "zoned": false, 00:25:29.138 "supported_io_types": { 00:25:29.138 "read": true, 00:25:29.138 "write": true, 00:25:29.138 "unmap": true, 00:25:29.138 "write_zeroes": true, 00:25:29.138 "flush": true, 00:25:29.138 "reset": true, 00:25:29.138 "compare": false, 00:25:29.138 "compare_and_write": false, 00:25:29.138 "abort": true, 00:25:29.138 "nvme_admin": false, 00:25:29.138 "nvme_io": false 00:25:29.138 }, 00:25:29.138 "memory_domains": [ 00:25:29.138 { 00:25:29.138 "dma_device_id": "system", 00:25:29.138 "dma_device_type": 1 00:25:29.138 }, 00:25:29.138 { 00:25:29.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.138 "dma_device_type": 2 00:25:29.138 } 00:25:29.138 ], 00:25:29.138 "driver_specific": {} 00:25:29.138 }' 00:25:29.138 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:29.138 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:29.139 07:38:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:29.139 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.397 [2024-05-16 07:38:22.860190] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.397 [2024-05-16 07:38:22.860219] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:29.397 [2024-05-16 07:38:22.860240] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:29.397 [2024-05-16 07:38:22.860256] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:29.397 [2024-05-16 07:38:22.860260] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82af94f00 name Existed_Raid, state offline 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 60489 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 60489 ']' 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 60489 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 60489 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:25:29.397 killing process with pid 60489 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60489' 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 60489 00:25:29.397 [2024-05-16 07:38:22.892241] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:29.397 07:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 60489 00:25:29.397 [2024-05-16 07:38:22.911546] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.658 07:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:25:29.658 00:25:29.658 real 0m25.714s 00:25:29.658 user 0m47.030s 00:25:29.658 sys 0m3.607s 00:25:29.658 07:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:29.658 ************************************ 00:25:29.659 END TEST raid_state_function_test_sb 00:25:29.659 ************************************ 00:25:29.659 07:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.659 07:38:23 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:25:29.659 07:38:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:29.659 07:38:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:29.659 07:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:29.659 
************************************ 00:25:29.659 START TEST raid_superblock_test 00:25:29.659 ************************************ 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61299 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61299 /var/tmp/spdk-raid.sock 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 61299 ']' 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:29.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:29.659 07:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.659 [2024-05-16 07:38:23.139014] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:25:29.659 [2024-05-16 07:38:23.139199] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:25:30.225 EAL: TSC is not safe to use in SMP mode 00:25:30.225 EAL: TSC is not invariant 00:25:30.225 [2024-05-16 07:38:23.602174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.225 [2024-05-16 07:38:23.696157] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:25:30.225 [2024-05-16 07:38:23.698819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.225 [2024-05-16 07:38:23.699743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.225 [2024-05-16 07:38:23.699761] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:30.874 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:31.132 malloc1 00:25:31.132 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:31.389 [2024-05-16 07:38:24.696259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:31.389 [2024-05-16 07:38:24.696321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.389 [2024-05-16 07:38:24.696997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c6780 00:25:31.389 [2024-05-16 07:38:24.697034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.389 [2024-05-16 07:38:24.697796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.389 [2024-05-16 07:38:24.697830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:31.389 pt1 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:31.389 07:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:31.647 malloc2 00:25:31.647 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:31.905 [2024-05-16 07:38:25.288266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:31.905 [2024-05-16 07:38:25.288331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.905 [2024-05-16 07:38:25.288361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c6c80 00:25:31.905 [2024-05-16 07:38:25.288370] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.905 [2024-05-16 07:38:25.288921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.906 [2024-05-16 07:38:25.288957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:31.906 pt2 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:31.906 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:32.165 malloc3 00:25:32.165 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:32.423 [2024-05-16 07:38:25.848264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:32.423 [2024-05-16 07:38:25.848328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.423 [2024-05-16 07:38:25.848367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c7180 00:25:32.423 [2024-05-16 07:38:25.848376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.423 [2024-05-16 07:38:25.848951] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.423 [2024-05-16 07:38:25.848996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:32.423 pt3 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:32.423 07:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:32.681 malloc4 00:25:32.681 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:32.939 [2024-05-16 07:38:26.396254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:32.939 [2024-05-16 07:38:26.396311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.939 [2024-05-16 07:38:26.396336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c7680 00:25:32.939 [2024-05-16 07:38:26.396344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.939 [2024-05-16 07:38:26.396799] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.939 [2024-05-16 07:38:26.396832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:32.939 pt4 00:25:32.939 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:32.939 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:32.939 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:33.196 [2024-05-16 07:38:26.596264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:33.196 [2024-05-16 07:38:26.596683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:33.197 [2024-05-16 07:38:26.596698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:33.197 [2024-05-16 07:38:26.596708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:33.197 [2024-05-16 07:38:26.596759] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c4c7900 00:25:33.197 [2024-05-16 07:38:26.596764] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:33.197 [2024-05-16 07:38:26.596813] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82c529e20 00:25:33.197 [2024-05-16 07:38:26.596871] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c4c7900 00:25:33.197 [2024-05-16 07:38:26.596875] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c4c7900 00:25:33.197 [2024-05-16 07:38:26.596900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.197 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.454 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:33.454 "name": "raid_bdev1", 00:25:33.454 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:33.454 "strip_size_kb": 64, 00:25:33.454 "state": "online", 00:25:33.454 "raid_level": "concat", 00:25:33.454 "superblock": true, 00:25:33.454 "num_base_bdevs": 4, 00:25:33.454 "num_base_bdevs_discovered": 4, 00:25:33.454 "num_base_bdevs_operational": 4, 00:25:33.454 "base_bdevs_list": [ 00:25:33.454 { 00:25:33.454 "name": "pt1", 00:25:33.454 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:33.454 "is_configured": true, 00:25:33.454 "data_offset": 2048, 00:25:33.454 "data_size": 63488 00:25:33.455 }, 00:25:33.455 { 00:25:33.455 "name": "pt2", 00:25:33.455 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:33.455 "is_configured": true, 00:25:33.455 "data_offset": 2048, 00:25:33.455 "data_size": 63488 00:25:33.455 }, 00:25:33.455 { 00:25:33.455 "name": "pt3", 00:25:33.455 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:33.455 "is_configured": true, 00:25:33.455 "data_offset": 2048, 00:25:33.455 "data_size": 63488 00:25:33.455 }, 00:25:33.455 { 00:25:33.455 "name": "pt4", 00:25:33.455 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:33.455 "is_configured": true, 00:25:33.455 "data_offset": 2048, 00:25:33.455 "data_size": 63488 00:25:33.455 } 00:25:33.455 ] 00:25:33.455 }' 00:25:33.455 07:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:33.455 07:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:33.712 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:33.713 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:33.971 [2024-05-16 07:38:27.364266] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:33.971 "name": "raid_bdev1", 00:25:33.971 "aliases": [ 00:25:33.971 "48300045-1357-11ef-8e8f-9dd684e56d79" 00:25:33.971 ], 00:25:33.971 "product_name": "Raid Volume", 00:25:33.971 "block_size": 512, 00:25:33.971 "num_blocks": 253952, 00:25:33.971 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:33.971 "assigned_rate_limits": { 00:25:33.971 "rw_ios_per_sec": 0, 00:25:33.971 "rw_mbytes_per_sec": 0, 00:25:33.971 "r_mbytes_per_sec": 0, 00:25:33.971 "w_mbytes_per_sec": 0 00:25:33.971 }, 00:25:33.971 "claimed": false, 00:25:33.971 "zoned": false, 00:25:33.971 "supported_io_types": { 00:25:33.971 "read": true, 00:25:33.971 "write": true, 00:25:33.971 "unmap": true, 00:25:33.971 "write_zeroes": true, 00:25:33.971 "flush": true, 00:25:33.971 "reset": true, 00:25:33.971 "compare": false, 00:25:33.971 "compare_and_write": false, 00:25:33.971 "abort": false, 00:25:33.971 "nvme_admin": false, 00:25:33.971 "nvme_io": false 00:25:33.971 }, 00:25:33.971 "memory_domains": [ 00:25:33.971 { 00:25:33.971 "dma_device_id": "system", 00:25:33.971 "dma_device_type": 1 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.971 "dma_device_type": 2 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "system", 00:25:33.971 "dma_device_type": 1 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.971 "dma_device_type": 2 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "system", 00:25:33.971 "dma_device_type": 1 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.971 "dma_device_type": 2 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "system", 00:25:33.971 "dma_device_type": 1 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.971 "dma_device_type": 2 00:25:33.971 } 00:25:33.971 ], 00:25:33.971 "driver_specific": { 00:25:33.971 "raid": { 00:25:33.971 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:33.971 "strip_size_kb": 64, 00:25:33.971 "state": "online", 00:25:33.971 "raid_level": "concat", 00:25:33.971 "superblock": true, 00:25:33.971 "num_base_bdevs": 4, 00:25:33.971 "num_base_bdevs_discovered": 4, 00:25:33.971 "num_base_bdevs_operational": 4, 00:25:33.971 "base_bdevs_list": [ 00:25:33.971 { 00:25:33.971 "name": "pt1", 00:25:33.971 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:33.971 "is_configured": true, 00:25:33.971 "data_offset": 2048, 
00:25:33.971 "data_size": 63488 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "name": "pt2", 00:25:33.971 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:33.971 "is_configured": true, 00:25:33.971 "data_offset": 2048, 00:25:33.971 "data_size": 63488 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "name": "pt3", 00:25:33.971 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:33.971 "is_configured": true, 00:25:33.971 "data_offset": 2048, 00:25:33.971 "data_size": 63488 00:25:33.971 }, 00:25:33.971 { 00:25:33.971 "name": "pt4", 00:25:33.971 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:33.971 "is_configured": true, 00:25:33.971 "data_offset": 2048, 00:25:33.971 "data_size": 63488 00:25:33.971 } 00:25:33.971 ] 00:25:33.971 } 00:25:33.971 } 00:25:33.971 }' 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:33.971 pt2 00:25:33.971 pt3 00:25:33.971 pt4' 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:33.971 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:34.229 "name": "pt1", 00:25:34.229 "aliases": [ 00:25:34.229 "c1b48963-99e8-715e-9157-29ca7b236179" 00:25:34.229 ], 00:25:34.229 "product_name": "passthru", 00:25:34.229 "block_size": 512, 00:25:34.229 "num_blocks": 65536, 00:25:34.229 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:34.229 "assigned_rate_limits": { 00:25:34.229 "rw_ios_per_sec": 0, 00:25:34.229 "rw_mbytes_per_sec": 0, 00:25:34.229 "r_mbytes_per_sec": 0, 00:25:34.229 "w_mbytes_per_sec": 0 00:25:34.229 }, 00:25:34.229 "claimed": true, 00:25:34.229 "claim_type": "exclusive_write", 00:25:34.229 "zoned": false, 00:25:34.229 "supported_io_types": { 00:25:34.229 "read": true, 00:25:34.229 "write": true, 00:25:34.229 "unmap": true, 00:25:34.229 "write_zeroes": true, 00:25:34.229 "flush": true, 00:25:34.229 "reset": true, 00:25:34.229 "compare": false, 00:25:34.229 "compare_and_write": false, 00:25:34.229 "abort": true, 00:25:34.229 "nvme_admin": false, 00:25:34.229 "nvme_io": false 00:25:34.229 }, 00:25:34.229 "memory_domains": [ 00:25:34.229 { 00:25:34.229 "dma_device_id": "system", 00:25:34.229 "dma_device_type": 1 00:25:34.229 }, 00:25:34.229 { 00:25:34.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.229 "dma_device_type": 2 00:25:34.229 } 00:25:34.229 ], 00:25:34.229 "driver_specific": { 00:25:34.229 "passthru": { 00:25:34.229 "name": "pt1", 00:25:34.229 "base_bdev_name": "malloc1" 00:25:34.229 } 00:25:34.229 } 00:25:34.229 }' 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.229 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:34.487 "name": "pt2", 00:25:34.487 "aliases": [ 00:25:34.487 "336ef110-9548-1956-b65d-f4759252e3e0" 00:25:34.487 ], 00:25:34.487 "product_name": "passthru", 00:25:34.487 "block_size": 512, 00:25:34.487 "num_blocks": 65536, 00:25:34.487 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:34.487 "assigned_rate_limits": { 00:25:34.487 "rw_ios_per_sec": 0, 00:25:34.487 "rw_mbytes_per_sec": 0, 00:25:34.487 "r_mbytes_per_sec": 0, 00:25:34.487 "w_mbytes_per_sec": 0 00:25:34.487 }, 00:25:34.487 "claimed": true, 00:25:34.487 "claim_type": "exclusive_write", 00:25:34.487 "zoned": false, 00:25:34.487 "supported_io_types": { 00:25:34.487 "read": true, 00:25:34.487 "write": true, 00:25:34.487 "unmap": true, 00:25:34.487 "write_zeroes": true, 00:25:34.487 "flush": true, 00:25:34.487 "reset": true, 00:25:34.487 "compare": false, 00:25:34.487 "compare_and_write": false, 00:25:34.487 "abort": true, 00:25:34.487 "nvme_admin": false, 00:25:34.487 "nvme_io": false 00:25:34.487 }, 00:25:34.487 "memory_domains": [ 00:25:34.487 { 00:25:34.487 "dma_device_id": "system", 00:25:34.487 "dma_device_type": 1 00:25:34.487 }, 00:25:34.487 { 00:25:34.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.487 "dma_device_type": 2 00:25:34.487 } 00:25:34.487 ], 00:25:34.487 "driver_specific": { 00:25:34.487 "passthru": { 00:25:34.487 "name": "pt2", 00:25:34.487 "base_bdev_name": "malloc2" 00:25:34.487 } 00:25:34.487 } 00:25:34.487 }' 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 
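The trace up to this point builds the array under test: four 32 MiB malloc bdevs with 512-byte blocks are each wrapped in a passthru bdev (pt1..pt4), a concat raid bdev with a 64 KiB strip size and an on-disk superblock is assembled on top of them, and the result is inspected with bdev_raid_get_bdevs / bdev_get_bdevs piped through jq. The snippet below is a minimal sketch of that RPC sequence reconstructed from the commands visible in the log; the socket path, sizes, and names are taken from the trace, while the loop structure and the RPC shell variable are illustrative rather than the literal bdev_raid.sh code.

RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, as in "bdev_malloc_create 32 512 -b mallocN"
    $RPC bdev_malloc_create 32 512 -b "malloc$i"
    # wrap it in a passthru bdev with a fixed UUID, as in "bdev_passthru_create"
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# assemble a concat array with a 64 KiB strip size and an on-disk superblock (-s)
$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# query the array and a base bdev the same way the verification helpers in the trace do
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
$RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]'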
00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:34.487 07:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:34.745 "name": "pt3", 00:25:34.745 "aliases": [ 00:25:34.745 "67944bba-e56d-1459-a4ee-109083ab9208" 00:25:34.745 ], 00:25:34.745 "product_name": "passthru", 00:25:34.745 "block_size": 512, 00:25:34.745 "num_blocks": 65536, 00:25:34.745 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:34.745 "assigned_rate_limits": { 00:25:34.745 "rw_ios_per_sec": 0, 00:25:34.745 "rw_mbytes_per_sec": 0, 00:25:34.745 "r_mbytes_per_sec": 0, 00:25:34.745 "w_mbytes_per_sec": 0 00:25:34.745 }, 00:25:34.745 "claimed": true, 00:25:34.745 "claim_type": "exclusive_write", 00:25:34.745 "zoned": false, 00:25:34.745 "supported_io_types": { 00:25:34.745 "read": true, 00:25:34.745 "write": true, 00:25:34.745 "unmap": true, 00:25:34.745 "write_zeroes": true, 00:25:34.745 "flush": true, 00:25:34.745 "reset": true, 00:25:34.745 "compare": false, 00:25:34.745 "compare_and_write": false, 00:25:34.745 "abort": true, 00:25:34.745 "nvme_admin": false, 00:25:34.745 "nvme_io": false 00:25:34.745 }, 00:25:34.745 "memory_domains": [ 00:25:34.745 { 00:25:34.745 "dma_device_id": "system", 00:25:34.745 "dma_device_type": 1 00:25:34.745 }, 00:25:34.745 { 00:25:34.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.745 "dma_device_type": 2 00:25:34.745 } 00:25:34.745 ], 00:25:34.745 "driver_specific": { 00:25:34.745 "passthru": { 00:25:34.745 "name": "pt3", 00:25:34.745 "base_bdev_name": "malloc3" 00:25:34.745 } 00:25:34.745 } 00:25:34.745 }' 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:34.745 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:35.040 "name": "pt4", 00:25:35.040 "aliases": [ 00:25:35.040 "4f27db46-0105-5950-9fb6-c3fc59d53242" 00:25:35.040 ], 00:25:35.040 "product_name": "passthru", 00:25:35.040 "block_size": 512, 00:25:35.040 "num_blocks": 65536, 00:25:35.040 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:35.040 "assigned_rate_limits": { 00:25:35.040 "rw_ios_per_sec": 0, 00:25:35.040 "rw_mbytes_per_sec": 0, 00:25:35.040 "r_mbytes_per_sec": 0, 00:25:35.040 "w_mbytes_per_sec": 0 00:25:35.040 }, 00:25:35.040 "claimed": true, 00:25:35.040 "claim_type": "exclusive_write", 00:25:35.040 "zoned": false, 00:25:35.040 "supported_io_types": { 00:25:35.040 "read": true, 00:25:35.040 "write": true, 00:25:35.040 "unmap": true, 00:25:35.040 "write_zeroes": true, 00:25:35.040 "flush": true, 00:25:35.040 "reset": true, 00:25:35.040 "compare": false, 00:25:35.040 "compare_and_write": false, 00:25:35.040 "abort": true, 00:25:35.040 "nvme_admin": false, 00:25:35.040 "nvme_io": false 00:25:35.040 }, 00:25:35.040 "memory_domains": [ 00:25:35.040 { 00:25:35.040 "dma_device_id": "system", 00:25:35.040 "dma_device_type": 1 00:25:35.040 }, 00:25:35.040 { 00:25:35.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.040 "dma_device_type": 2 00:25:35.040 } 00:25:35.040 ], 00:25:35.040 "driver_specific": { 00:25:35.040 "passthru": { 00:25:35.040 "name": "pt4", 00:25:35.040 "base_bdev_name": "malloc4" 00:25:35.040 } 00:25:35.040 } 00:25:35.040 }' 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:35.040 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:35.041 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:35.300 [2024-05-16 07:38:28.736258] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.300 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=48300045-1357-11ef-8e8f-9dd684e56d79 00:25:35.300 07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 48300045-1357-11ef-8e8f-9dd684e56d79 ']' 00:25:35.300 
07:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:35.558 [2024-05-16 07:38:29.072242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:35.558 [2024-05-16 07:38:29.072267] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.558 [2024-05-16 07:38:29.072286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.558 [2024-05-16 07:38:29.072301] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.558 [2024-05-16 07:38:29.072305] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4c7900 name raid_bdev1, state offline 00:25:35.558 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.558 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.125 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:36.383 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.383 07:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:36.641 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:36.641 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:36.899 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:36.899 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:37.158 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:37.159 07:38:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:37.159 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:37.417 [2024-05-16 07:38:30.948267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:37.417 [2024-05-16 07:38:30.948734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:37.417 [2024-05-16 07:38:30.948747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:37.417 [2024-05-16 07:38:30.948754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:37.417 [2024-05-16 07:38:30.948768] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:37.417 [2024-05-16 07:38:30.948826] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:37.417 [2024-05-16 07:38:30.948838] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:37.417 [2024-05-16 07:38:30.948847] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:37.417 [2024-05-16 07:38:30.948856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.417 [2024-05-16 07:38:30.948860] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4c7680 name raid_bdev1, state configuring 00:25:37.417 request: 00:25:37.417 { 00:25:37.417 "name": "raid_bdev1", 00:25:37.417 "raid_level": "concat", 00:25:37.417 "base_bdevs": [ 00:25:37.417 "malloc1", 00:25:37.417 "malloc2", 00:25:37.417 "malloc3", 00:25:37.417 "malloc4" 00:25:37.417 ], 00:25:37.417 "superblock": false, 00:25:37.417 "strip_size_kb": 64, 00:25:37.417 "method": "bdev_raid_create", 00:25:37.417 "req_id": 1 00:25:37.417 } 00:25:37.417 Got JSON-RPC error response 00:25:37.417 response: 00:25:37.417 { 00:25:37.417 "code": -17, 00:25:37.417 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:37.417 } 00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
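The bdev_raid_create call traced above is the test's negative case: raid_bdev1 has just been deleted and its passthru bdevs torn down, but each malloc bdev still carries the superblock written for the old array, so creating a new array directly on malloc1..malloc4 is rejected with JSON-RPC error -17 ("File exists"), which the surrounding NOT helper treats as the expected outcome (es=1). Below is a minimal sketch of that check outside the harness, using the same socket and bdev names as the log; the if-wrapper merely stands in for the NOT helper from autotest_common.sh.

RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Expected to fail: the malloc bdevs still hold superblocks naming a different raid bdev.
if $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
fi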
00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.417 07:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:37.676 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:37.676 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:37.676 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:37.935 [2024-05-16 07:38:31.432249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:37.935 [2024-05-16 07:38:31.432311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.935 [2024-05-16 07:38:31.432341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c7180 00:25:37.935 [2024-05-16 07:38:31.432351] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.935 [2024-05-16 07:38:31.432865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.935 [2024-05-16 07:38:31.432898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:37.935 [2024-05-16 07:38:31.432923] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:37.935 [2024-05-16 07:38:31.432934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:37.935 pt1 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.935 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.195 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:38.195 "name": "raid_bdev1", 00:25:38.195 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:38.195 "strip_size_kb": 64, 
00:25:38.195 "state": "configuring", 00:25:38.195 "raid_level": "concat", 00:25:38.195 "superblock": true, 00:25:38.195 "num_base_bdevs": 4, 00:25:38.196 "num_base_bdevs_discovered": 1, 00:25:38.196 "num_base_bdevs_operational": 4, 00:25:38.196 "base_bdevs_list": [ 00:25:38.196 { 00:25:38.196 "name": "pt1", 00:25:38.196 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:38.196 "is_configured": true, 00:25:38.196 "data_offset": 2048, 00:25:38.196 "data_size": 63488 00:25:38.196 }, 00:25:38.196 { 00:25:38.196 "name": null, 00:25:38.196 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:38.196 "is_configured": false, 00:25:38.196 "data_offset": 2048, 00:25:38.196 "data_size": 63488 00:25:38.196 }, 00:25:38.196 { 00:25:38.196 "name": null, 00:25:38.196 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:38.196 "is_configured": false, 00:25:38.196 "data_offset": 2048, 00:25:38.196 "data_size": 63488 00:25:38.196 }, 00:25:38.196 { 00:25:38.196 "name": null, 00:25:38.196 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:38.196 "is_configured": false, 00:25:38.196 "data_offset": 2048, 00:25:38.196 "data_size": 63488 00:25:38.196 } 00:25:38.196 ] 00:25:38.196 }' 00:25:38.196 07:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:38.196 07:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.767 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:25:38.767 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:38.767 [2024-05-16 07:38:32.280270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:38.767 [2024-05-16 07:38:32.280357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:38.767 [2024-05-16 07:38:32.280399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c6780 00:25:38.767 [2024-05-16 07:38:32.280421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:38.767 [2024-05-16 07:38:32.280550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:38.767 [2024-05-16 07:38:32.280561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:38.767 [2024-05-16 07:38:32.280584] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:38.767 [2024-05-16 07:38:32.280592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:38.767 pt2 00:25:38.767 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:39.024 [2024-05-16 07:38:32.544260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.024 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.283 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:39.283 "name": "raid_bdev1", 00:25:39.283 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:39.283 "strip_size_kb": 64, 00:25:39.283 "state": "configuring", 00:25:39.283 "raid_level": "concat", 00:25:39.283 "superblock": true, 00:25:39.283 "num_base_bdevs": 4, 00:25:39.283 "num_base_bdevs_discovered": 1, 00:25:39.283 "num_base_bdevs_operational": 4, 00:25:39.283 "base_bdevs_list": [ 00:25:39.283 { 00:25:39.283 "name": "pt1", 00:25:39.283 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:39.283 "is_configured": true, 00:25:39.283 "data_offset": 2048, 00:25:39.283 "data_size": 63488 00:25:39.283 }, 00:25:39.283 { 00:25:39.283 "name": null, 00:25:39.283 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:39.283 "is_configured": false, 00:25:39.283 "data_offset": 2048, 00:25:39.283 "data_size": 63488 00:25:39.283 }, 00:25:39.283 { 00:25:39.283 "name": null, 00:25:39.283 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:39.283 "is_configured": false, 00:25:39.283 "data_offset": 2048, 00:25:39.283 "data_size": 63488 00:25:39.283 }, 00:25:39.283 { 00:25:39.283 "name": null, 00:25:39.283 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:39.283 "is_configured": false, 00:25:39.283 "data_offset": 2048, 00:25:39.283 "data_size": 63488 00:25:39.283 } 00:25:39.283 ] 00:25:39.283 }' 00:25:39.283 07:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:39.283 07:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.541 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:39.541 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:39.541 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:39.798 [2024-05-16 07:38:33.256296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:39.798 [2024-05-16 07:38:33.256364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:39.798 [2024-05-16 07:38:33.256398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c6780 00:25:39.798 [2024-05-16 07:38:33.256414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:39.798 [2024-05-16 07:38:33.256545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:39.798 [2024-05-16 07:38:33.256557] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:39.798 [2024-05-16 07:38:33.256580] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:39.798 [2024-05-16 07:38:33.256597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:39.798 pt2 00:25:39.798 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:39.798 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:39.798 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:40.056 [2024-05-16 07:38:33.460279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:40.056 [2024-05-16 07:38:33.460348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.056 [2024-05-16 07:38:33.460383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c7b80 00:25:40.056 [2024-05-16 07:38:33.460394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.056 [2024-05-16 07:38:33.460532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.056 [2024-05-16 07:38:33.460549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:40.056 [2024-05-16 07:38:33.460573] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:40.056 [2024-05-16 07:38:33.460583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:40.056 pt3 00:25:40.056 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:40.056 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:40.056 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:40.336 [2024-05-16 07:38:33.652273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:40.336 [2024-05-16 07:38:33.652335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:40.336 [2024-05-16 07:38:33.652369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c4c7900 00:25:40.336 [2024-05-16 07:38:33.652380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:40.336 [2024-05-16 07:38:33.652505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:40.336 [2024-05-16 07:38:33.652531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:40.336 [2024-05-16 07:38:33.652564] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:40.336 [2024-05-16 07:38:33.652572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:40.336 [2024-05-16 07:38:33.652600] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c4c6c80 00:25:40.336 [2024-05-16 07:38:33.652607] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:40.336 [2024-05-16 07:38:33.652640] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c529e20 00:25:40.336 
[2024-05-16 07:38:33.652725] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c4c6c80 00:25:40.336 [2024-05-16 07:38:33.652733] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c4c6c80 00:25:40.336 [2024-05-16 07:38:33.652758] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.336 pt4 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:40.336 "name": "raid_bdev1", 00:25:40.336 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:40.336 "strip_size_kb": 64, 00:25:40.336 "state": "online", 00:25:40.336 "raid_level": "concat", 00:25:40.336 "superblock": true, 00:25:40.336 "num_base_bdevs": 4, 00:25:40.336 "num_base_bdevs_discovered": 4, 00:25:40.336 "num_base_bdevs_operational": 4, 00:25:40.336 "base_bdevs_list": [ 00:25:40.336 { 00:25:40.336 "name": "pt1", 00:25:40.336 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:40.336 "is_configured": true, 00:25:40.336 "data_offset": 2048, 00:25:40.336 "data_size": 63488 00:25:40.336 }, 00:25:40.336 { 00:25:40.336 "name": "pt2", 00:25:40.336 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:40.336 "is_configured": true, 00:25:40.336 "data_offset": 2048, 00:25:40.336 "data_size": 63488 00:25:40.336 }, 00:25:40.336 { 00:25:40.336 "name": "pt3", 00:25:40.336 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:40.336 "is_configured": true, 00:25:40.336 "data_offset": 2048, 00:25:40.336 "data_size": 63488 00:25:40.336 }, 00:25:40.336 { 00:25:40.336 "name": "pt4", 00:25:40.336 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:40.336 "is_configured": true, 00:25:40.336 "data_offset": 2048, 00:25:40.336 "data_size": 63488 00:25:40.336 } 00:25:40.336 ] 00:25:40.336 }' 00:25:40.336 07:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.336 07:38:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:40.598 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:40.856 [2024-05-16 07:38:34.388316] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:40.856 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:40.856 "name": "raid_bdev1", 00:25:40.856 "aliases": [ 00:25:40.856 "48300045-1357-11ef-8e8f-9dd684e56d79" 00:25:40.856 ], 00:25:40.856 "product_name": "Raid Volume", 00:25:40.856 "block_size": 512, 00:25:40.856 "num_blocks": 253952, 00:25:40.856 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:40.856 "assigned_rate_limits": { 00:25:40.856 "rw_ios_per_sec": 0, 00:25:40.856 "rw_mbytes_per_sec": 0, 00:25:40.856 "r_mbytes_per_sec": 0, 00:25:40.856 "w_mbytes_per_sec": 0 00:25:40.856 }, 00:25:40.856 "claimed": false, 00:25:40.856 "zoned": false, 00:25:40.856 "supported_io_types": { 00:25:40.856 "read": true, 00:25:40.856 "write": true, 00:25:40.856 "unmap": true, 00:25:40.856 "write_zeroes": true, 00:25:40.856 "flush": true, 00:25:40.856 "reset": true, 00:25:40.856 "compare": false, 00:25:40.856 "compare_and_write": false, 00:25:40.856 "abort": false, 00:25:40.856 "nvme_admin": false, 00:25:40.856 "nvme_io": false 00:25:40.856 }, 00:25:40.856 "memory_domains": [ 00:25:40.856 { 00:25:40.856 "dma_device_id": "system", 00:25:40.856 "dma_device_type": 1 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.856 "dma_device_type": 2 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "system", 00:25:40.856 "dma_device_type": 1 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.856 "dma_device_type": 2 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "system", 00:25:40.856 "dma_device_type": 1 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.856 "dma_device_type": 2 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "system", 00:25:40.856 "dma_device_type": 1 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.856 "dma_device_type": 2 00:25:40.856 } 00:25:40.856 ], 00:25:40.856 "driver_specific": { 00:25:40.856 "raid": { 00:25:40.856 "uuid": "48300045-1357-11ef-8e8f-9dd684e56d79", 00:25:40.856 "strip_size_kb": 64, 00:25:40.856 "state": "online", 00:25:40.856 "raid_level": "concat", 00:25:40.856 "superblock": true, 00:25:40.856 "num_base_bdevs": 4, 00:25:40.856 "num_base_bdevs_discovered": 4, 00:25:40.856 "num_base_bdevs_operational": 4, 00:25:40.856 "base_bdevs_list": [ 00:25:40.856 { 
00:25:40.856 "name": "pt1", 00:25:40.856 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:40.856 "is_configured": true, 00:25:40.856 "data_offset": 2048, 00:25:40.856 "data_size": 63488 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "name": "pt2", 00:25:40.856 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:40.856 "is_configured": true, 00:25:40.856 "data_offset": 2048, 00:25:40.856 "data_size": 63488 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "name": "pt3", 00:25:40.856 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:40.856 "is_configured": true, 00:25:40.856 "data_offset": 2048, 00:25:40.856 "data_size": 63488 00:25:40.856 }, 00:25:40.856 { 00:25:40.856 "name": "pt4", 00:25:40.856 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:40.856 "is_configured": true, 00:25:40.856 "data_offset": 2048, 00:25:40.856 "data_size": 63488 00:25:40.856 } 00:25:40.856 ] 00:25:40.856 } 00:25:40.856 } 00:25:40.856 }' 00:25:40.856 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:41.115 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:25:41.115 pt2 00:25:41.115 pt3 00:25:41.115 pt4' 00:25:41.115 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:41.115 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:41.115 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:41.393 "name": "pt1", 00:25:41.393 "aliases": [ 00:25:41.393 "c1b48963-99e8-715e-9157-29ca7b236179" 00:25:41.393 ], 00:25:41.393 "product_name": "passthru", 00:25:41.393 "block_size": 512, 00:25:41.393 "num_blocks": 65536, 00:25:41.393 "uuid": "c1b48963-99e8-715e-9157-29ca7b236179", 00:25:41.393 "assigned_rate_limits": { 00:25:41.393 "rw_ios_per_sec": 0, 00:25:41.393 "rw_mbytes_per_sec": 0, 00:25:41.393 "r_mbytes_per_sec": 0, 00:25:41.393 "w_mbytes_per_sec": 0 00:25:41.393 }, 00:25:41.393 "claimed": true, 00:25:41.393 "claim_type": "exclusive_write", 00:25:41.393 "zoned": false, 00:25:41.393 "supported_io_types": { 00:25:41.393 "read": true, 00:25:41.393 "write": true, 00:25:41.393 "unmap": true, 00:25:41.393 "write_zeroes": true, 00:25:41.393 "flush": true, 00:25:41.393 "reset": true, 00:25:41.393 "compare": false, 00:25:41.393 "compare_and_write": false, 00:25:41.393 "abort": true, 00:25:41.393 "nvme_admin": false, 00:25:41.393 "nvme_io": false 00:25:41.393 }, 00:25:41.393 "memory_domains": [ 00:25:41.393 { 00:25:41.393 "dma_device_id": "system", 00:25:41.393 "dma_device_type": 1 00:25:41.393 }, 00:25:41.393 { 00:25:41.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.393 "dma_device_type": 2 00:25:41.393 } 00:25:41.393 ], 00:25:41.393 "driver_specific": { 00:25:41.393 "passthru": { 00:25:41.393 "name": "pt1", 00:25:41.393 "base_bdev_name": "malloc1" 00:25:41.393 } 00:25:41.393 } 00:25:41.393 }' 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:41.393 07:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:41.652 "name": "pt2", 00:25:41.652 "aliases": [ 00:25:41.652 "336ef110-9548-1956-b65d-f4759252e3e0" 00:25:41.652 ], 00:25:41.652 "product_name": "passthru", 00:25:41.652 "block_size": 512, 00:25:41.652 "num_blocks": 65536, 00:25:41.652 "uuid": "336ef110-9548-1956-b65d-f4759252e3e0", 00:25:41.652 "assigned_rate_limits": { 00:25:41.652 "rw_ios_per_sec": 0, 00:25:41.652 "rw_mbytes_per_sec": 0, 00:25:41.652 "r_mbytes_per_sec": 0, 00:25:41.652 "w_mbytes_per_sec": 0 00:25:41.652 }, 00:25:41.652 "claimed": true, 00:25:41.652 "claim_type": "exclusive_write", 00:25:41.652 "zoned": false, 00:25:41.652 "supported_io_types": { 00:25:41.652 "read": true, 00:25:41.652 "write": true, 00:25:41.652 "unmap": true, 00:25:41.652 "write_zeroes": true, 00:25:41.652 "flush": true, 00:25:41.652 "reset": true, 00:25:41.652 "compare": false, 00:25:41.652 "compare_and_write": false, 00:25:41.652 "abort": true, 00:25:41.652 "nvme_admin": false, 00:25:41.652 "nvme_io": false 00:25:41.652 }, 00:25:41.652 "memory_domains": [ 00:25:41.652 { 00:25:41.652 "dma_device_id": "system", 00:25:41.652 "dma_device_type": 1 00:25:41.652 }, 00:25:41.652 { 00:25:41.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.652 "dma_device_type": 2 00:25:41.652 } 00:25:41.652 ], 00:25:41.652 "driver_specific": { 00:25:41.652 "passthru": { 00:25:41.652 "name": "pt2", 00:25:41.652 "base_bdev_name": "malloc2" 00:25:41.652 } 00:25:41.652 } 00:25:41.652 }' 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.652 07:38:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:41.652 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:41.911 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:41.911 "name": "pt3", 00:25:41.911 "aliases": [ 00:25:41.911 "67944bba-e56d-1459-a4ee-109083ab9208" 00:25:41.911 ], 00:25:41.911 "product_name": "passthru", 00:25:41.911 "block_size": 512, 00:25:41.911 "num_blocks": 65536, 00:25:41.911 "uuid": "67944bba-e56d-1459-a4ee-109083ab9208", 00:25:41.911 "assigned_rate_limits": { 00:25:41.911 "rw_ios_per_sec": 0, 00:25:41.911 "rw_mbytes_per_sec": 0, 00:25:41.911 "r_mbytes_per_sec": 0, 00:25:41.911 "w_mbytes_per_sec": 0 00:25:41.911 }, 00:25:41.911 "claimed": true, 00:25:41.912 "claim_type": "exclusive_write", 00:25:41.912 "zoned": false, 00:25:41.912 "supported_io_types": { 00:25:41.912 "read": true, 00:25:41.912 "write": true, 00:25:41.912 "unmap": true, 00:25:41.912 "write_zeroes": true, 00:25:41.912 "flush": true, 00:25:41.912 "reset": true, 00:25:41.912 "compare": false, 00:25:41.912 "compare_and_write": false, 00:25:41.912 "abort": true, 00:25:41.912 "nvme_admin": false, 00:25:41.912 "nvme_io": false 00:25:41.912 }, 00:25:41.912 "memory_domains": [ 00:25:41.912 { 00:25:41.912 "dma_device_id": "system", 00:25:41.912 "dma_device_type": 1 00:25:41.912 }, 00:25:41.912 { 00:25:41.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.912 "dma_device_type": 2 00:25:41.912 } 00:25:41.912 ], 00:25:41.912 "driver_specific": { 00:25:41.912 "passthru": { 00:25:41.912 "name": "pt3", 00:25:41.912 "base_bdev_name": "malloc3" 00:25:41.912 } 00:25:41.912 } 00:25:41.912 }' 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:41.912 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:42.170 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:42.170 "name": "pt4", 00:25:42.170 "aliases": [ 00:25:42.170 "4f27db46-0105-5950-9fb6-c3fc59d53242" 00:25:42.170 ], 00:25:42.170 "product_name": "passthru", 00:25:42.170 "block_size": 512, 00:25:42.170 "num_blocks": 65536, 00:25:42.170 "uuid": "4f27db46-0105-5950-9fb6-c3fc59d53242", 00:25:42.170 "assigned_rate_limits": { 00:25:42.170 "rw_ios_per_sec": 0, 00:25:42.170 "rw_mbytes_per_sec": 0, 00:25:42.170 "r_mbytes_per_sec": 0, 00:25:42.170 "w_mbytes_per_sec": 0 00:25:42.170 }, 00:25:42.170 "claimed": true, 00:25:42.170 "claim_type": "exclusive_write", 00:25:42.170 "zoned": false, 00:25:42.170 "supported_io_types": { 00:25:42.171 "read": true, 00:25:42.171 "write": true, 00:25:42.171 "unmap": true, 00:25:42.171 "write_zeroes": true, 00:25:42.171 "flush": true, 00:25:42.171 "reset": true, 00:25:42.171 "compare": false, 00:25:42.171 "compare_and_write": false, 00:25:42.171 "abort": true, 00:25:42.171 "nvme_admin": false, 00:25:42.171 "nvme_io": false 00:25:42.171 }, 00:25:42.171 "memory_domains": [ 00:25:42.171 { 00:25:42.171 "dma_device_id": "system", 00:25:42.171 "dma_device_type": 1 00:25:42.171 }, 00:25:42.171 { 00:25:42.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.171 "dma_device_type": 2 00:25:42.171 } 00:25:42.171 ], 00:25:42.171 "driver_specific": { 00:25:42.171 "passthru": { 00:25:42.171 "name": "pt4", 00:25:42.171 "base_bdev_name": "malloc4" 00:25:42.171 } 00:25:42.171 } 00:25:42.171 }' 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:42.171 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:42.430 [2024-05-16 07:38:35.900286] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
48300045-1357-11ef-8e8f-9dd684e56d79 '!=' 48300045-1357-11ef-8e8f-9dd684e56d79 ']' 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@216 -- # return 1 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61299 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 61299 ']' 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 61299 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 61299 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:25:42.430 killing process with pid 61299 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61299' 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 61299 00:25:42.430 [2024-05-16 07:38:35.933865] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:42.430 [2024-05-16 07:38:35.933885] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.430 [2024-05-16 07:38:35.933901] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.430 [2024-05-16 07:38:35.933905] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4c6c80 name raid_bdev1, state offline 00:25:42.430 07:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 61299 00:25:42.430 [2024-05-16 07:38:35.953104] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:42.689 07:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:42.689 00:25:42.689 real 0m12.990s 00:25:42.689 user 0m23.113s 00:25:42.689 sys 0m2.091s 00:25:42.689 07:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:42.689 07:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.689 ************************************ 00:25:42.689 END TEST raid_superblock_test 00:25:42.689 ************************************ 00:25:42.689 07:38:36 bdev_raid -- bdev/bdev_raid.sh@802 -- # for level in raid0 concat raid1 00:25:42.689 07:38:36 bdev_raid -- bdev/bdev_raid.sh@803 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:42.689 07:38:36 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:42.689 07:38:36 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:42.689 07:38:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.689 ************************************ 00:25:42.689 START TEST raid_state_function_test 00:25:42.689 ************************************ 00:25:42.689 07:38:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local superblock=false 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@238 -- # '[' false = true ']' 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # superblock_create_arg= 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # raid_pid=61698 00:25:42.689 Process raid pid: 61698 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 61698' 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@247 -- # waitforlisten 61698 /var/tmp/spdk-raid.sock 00:25:42.689 07:38:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 61698 ']' 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:42.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:42.689 07:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.689 [2024-05-16 07:38:36.176327] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:25:42.689 [2024-05-16 07:38:36.176514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:25:43.255 EAL: TSC is not safe to use in SMP mode 00:25:43.255 EAL: TSC is not invariant 00:25:43.255 [2024-05-16 07:38:36.652557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.255 [2024-05-16 07:38:36.733850] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:25:43.255 [2024-05-16 07:38:36.735983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.255 [2024-05-16 07:38:36.736745] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.255 [2024-05-16 07:38:36.736760] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.187 07:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:44.187 07:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:25:44.187 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:44.446 [2024-05-16 07:38:37.791314] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:44.446 [2024-05-16 07:38:37.791367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:44.446 [2024-05-16 07:38:37.791373] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:44.446 [2024-05-16 07:38:37.791382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:44.446 [2024-05-16 07:38:37.791385] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:44.446 [2024-05-16 07:38:37.791392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:44.446 [2024-05-16 07:38:37.791396] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:44.446 [2024-05-16 07:38:37.791403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.446 07:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.705 07:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:44.705 "name": "Existed_Raid", 00:25:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.705 "strip_size_kb": 0, 00:25:44.705 "state": "configuring", 00:25:44.705 "raid_level": "raid1", 00:25:44.705 "superblock": false, 00:25:44.705 "num_base_bdevs": 4, 00:25:44.705 "num_base_bdevs_discovered": 0, 00:25:44.705 "num_base_bdevs_operational": 4, 00:25:44.705 "base_bdevs_list": [ 00:25:44.705 { 00:25:44.705 "name": "BaseBdev1", 00:25:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.705 "is_configured": false, 00:25:44.705 "data_offset": 0, 00:25:44.705 "data_size": 0 00:25:44.705 }, 00:25:44.705 { 00:25:44.705 "name": "BaseBdev2", 00:25:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.705 "is_configured": false, 00:25:44.705 "data_offset": 0, 00:25:44.705 "data_size": 0 00:25:44.705 }, 00:25:44.705 { 00:25:44.705 "name": "BaseBdev3", 00:25:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.705 "is_configured": false, 00:25:44.705 "data_offset": 0, 00:25:44.705 "data_size": 0 00:25:44.705 }, 00:25:44.705 { 00:25:44.705 "name": "BaseBdev4", 00:25:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.705 "is_configured": false, 00:25:44.705 "data_offset": 0, 00:25:44.705 "data_size": 0 00:25:44.705 } 00:25:44.705 ] 00:25:44.705 }' 00:25:44.705 07:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:44.705 07:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.271 07:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:45.271 [2024-05-16 07:38:38.703271] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:45.271 [2024-05-16 07:38:38.703299] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3b0500 name Existed_Raid, state configuring 00:25:45.271 07:38:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:45.529 [2024-05-16 07:38:38.971274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:45.529 [2024-05-16 07:38:38.971316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:45.529 [2024-05-16 07:38:38.971320] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:45.529 [2024-05-16 07:38:38.971326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:45.529 [2024-05-16 07:38:38.971329] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:45.529 [2024-05-16 07:38:38.971335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:45.529 [2024-05-16 07:38:38.971338] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:45.529 [2024-05-16 07:38:38.971344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:45.529 07:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:45.787 [2024-05-16 07:38:39.164152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:45.787 BaseBdev1 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:45.787 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:46.045 [ 00:25:46.045 { 00:25:46.045 "name": "BaseBdev1", 00:25:46.045 "aliases": [ 00:25:46.045 "4fad9485-1357-11ef-8e8f-9dd684e56d79" 00:25:46.045 ], 00:25:46.045 "product_name": "Malloc disk", 00:25:46.045 "block_size": 512, 00:25:46.045 "num_blocks": 65536, 00:25:46.045 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:46.045 "assigned_rate_limits": { 00:25:46.045 "rw_ios_per_sec": 0, 00:25:46.045 "rw_mbytes_per_sec": 0, 00:25:46.045 "r_mbytes_per_sec": 0, 00:25:46.045 "w_mbytes_per_sec": 0 00:25:46.045 }, 00:25:46.045 "claimed": true, 00:25:46.045 "claim_type": "exclusive_write", 00:25:46.045 "zoned": false, 00:25:46.045 "supported_io_types": { 00:25:46.045 "read": true, 00:25:46.045 "write": true, 00:25:46.045 "unmap": true, 00:25:46.045 "write_zeroes": true, 00:25:46.045 "flush": true, 00:25:46.045 "reset": true, 00:25:46.045 
"compare": false, 00:25:46.045 "compare_and_write": false, 00:25:46.045 "abort": true, 00:25:46.045 "nvme_admin": false, 00:25:46.045 "nvme_io": false 00:25:46.045 }, 00:25:46.045 "memory_domains": [ 00:25:46.045 { 00:25:46.045 "dma_device_id": "system", 00:25:46.045 "dma_device_type": 1 00:25:46.045 }, 00:25:46.045 { 00:25:46.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.045 "dma_device_type": 2 00:25:46.045 } 00:25:46.045 ], 00:25:46.045 "driver_specific": {} 00:25:46.045 } 00:25:46.045 ] 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.045 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.304 07:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.304 "name": "Existed_Raid", 00:25:46.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.304 "strip_size_kb": 0, 00:25:46.304 "state": "configuring", 00:25:46.304 "raid_level": "raid1", 00:25:46.304 "superblock": false, 00:25:46.304 "num_base_bdevs": 4, 00:25:46.304 "num_base_bdevs_discovered": 1, 00:25:46.304 "num_base_bdevs_operational": 4, 00:25:46.304 "base_bdevs_list": [ 00:25:46.304 { 00:25:46.304 "name": "BaseBdev1", 00:25:46.304 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:46.304 "is_configured": true, 00:25:46.304 "data_offset": 0, 00:25:46.304 "data_size": 65536 00:25:46.304 }, 00:25:46.304 { 00:25:46.304 "name": "BaseBdev2", 00:25:46.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.304 "is_configured": false, 00:25:46.304 "data_offset": 0, 00:25:46.304 "data_size": 0 00:25:46.304 }, 00:25:46.304 { 00:25:46.304 "name": "BaseBdev3", 00:25:46.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.304 "is_configured": false, 00:25:46.304 "data_offset": 0, 00:25:46.304 "data_size": 0 00:25:46.304 }, 00:25:46.304 { 00:25:46.304 "name": "BaseBdev4", 00:25:46.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.304 "is_configured": false, 00:25:46.304 "data_offset": 0, 00:25:46.304 "data_size": 0 00:25:46.304 } 00:25:46.304 ] 00:25:46.304 }' 00:25:46.304 07:38:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.304 07:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.562 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:46.819 [2024-05-16 07:38:40.267273] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:46.819 [2024-05-16 07:38:40.267303] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3b0500 name Existed_Raid, state configuring 00:25:46.819 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:47.076 [2024-05-16 07:38:40.527283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:47.076 [2024-05-16 07:38:40.527990] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:47.076 [2024-05-16 07:38:40.528032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:47.076 [2024-05-16 07:38:40.528037] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:47.076 [2024-05-16 07:38:40.528044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:47.076 [2024-05-16 07:38:40.528048] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:47.076 [2024-05-16 07:38:40.528054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.076 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.334 07:38:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.334 "name": "Existed_Raid", 00:25:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.334 "strip_size_kb": 0, 00:25:47.334 "state": "configuring", 00:25:47.334 "raid_level": "raid1", 00:25:47.334 "superblock": false, 00:25:47.334 "num_base_bdevs": 4, 00:25:47.334 "num_base_bdevs_discovered": 1, 00:25:47.334 "num_base_bdevs_operational": 4, 00:25:47.334 "base_bdevs_list": [ 00:25:47.334 { 00:25:47.334 "name": "BaseBdev1", 00:25:47.334 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:47.334 "is_configured": true, 00:25:47.334 "data_offset": 0, 00:25:47.334 "data_size": 65536 00:25:47.334 }, 00:25:47.334 { 00:25:47.334 "name": "BaseBdev2", 00:25:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.334 "is_configured": false, 00:25:47.334 "data_offset": 0, 00:25:47.334 "data_size": 0 00:25:47.334 }, 00:25:47.334 { 00:25:47.334 "name": "BaseBdev3", 00:25:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.334 "is_configured": false, 00:25:47.334 "data_offset": 0, 00:25:47.334 "data_size": 0 00:25:47.334 }, 00:25:47.334 { 00:25:47.334 "name": "BaseBdev4", 00:25:47.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.334 "is_configured": false, 00:25:47.334 "data_offset": 0, 00:25:47.334 "data_size": 0 00:25:47.334 } 00:25:47.334 ] 00:25:47.334 }' 00:25:47.334 07:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.334 07:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.593 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:47.850 [2024-05-16 07:38:41.279459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.850 BaseBdev2 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:47.850 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:48.109 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:48.109 [ 00:25:48.109 { 00:25:48.109 "name": "BaseBdev2", 00:25:48.109 "aliases": [ 00:25:48.109 "50f076dc-1357-11ef-8e8f-9dd684e56d79" 00:25:48.109 ], 00:25:48.109 "product_name": "Malloc disk", 00:25:48.109 "block_size": 512, 00:25:48.109 "num_blocks": 65536, 00:25:48.109 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:48.109 "assigned_rate_limits": { 00:25:48.109 "rw_ios_per_sec": 0, 00:25:48.109 "rw_mbytes_per_sec": 0, 00:25:48.109 "r_mbytes_per_sec": 0, 00:25:48.109 "w_mbytes_per_sec": 0 00:25:48.109 }, 00:25:48.109 "claimed": true, 00:25:48.109 
"claim_type": "exclusive_write", 00:25:48.109 "zoned": false, 00:25:48.109 "supported_io_types": { 00:25:48.109 "read": true, 00:25:48.109 "write": true, 00:25:48.109 "unmap": true, 00:25:48.109 "write_zeroes": true, 00:25:48.109 "flush": true, 00:25:48.109 "reset": true, 00:25:48.109 "compare": false, 00:25:48.109 "compare_and_write": false, 00:25:48.109 "abort": true, 00:25:48.109 "nvme_admin": false, 00:25:48.109 "nvme_io": false 00:25:48.109 }, 00:25:48.109 "memory_domains": [ 00:25:48.109 { 00:25:48.109 "dma_device_id": "system", 00:25:48.109 "dma_device_type": 1 00:25:48.109 }, 00:25:48.109 { 00:25:48.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.109 "dma_device_type": 2 00:25:48.109 } 00:25:48.109 ], 00:25:48.109 "driver_specific": {} 00:25:48.109 } 00:25:48.109 ] 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:48.368 "name": "Existed_Raid", 00:25:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.368 "strip_size_kb": 0, 00:25:48.368 "state": "configuring", 00:25:48.368 "raid_level": "raid1", 00:25:48.368 "superblock": false, 00:25:48.368 "num_base_bdevs": 4, 00:25:48.368 "num_base_bdevs_discovered": 2, 00:25:48.368 "num_base_bdevs_operational": 4, 00:25:48.368 "base_bdevs_list": [ 00:25:48.368 { 00:25:48.368 "name": "BaseBdev1", 00:25:48.368 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:48.368 "is_configured": true, 00:25:48.368 "data_offset": 0, 00:25:48.368 "data_size": 65536 00:25:48.368 }, 00:25:48.368 { 00:25:48.368 "name": "BaseBdev2", 00:25:48.368 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:48.368 "is_configured": true, 00:25:48.368 "data_offset": 0, 00:25:48.368 "data_size": 65536 00:25:48.368 }, 00:25:48.368 { 
00:25:48.368 "name": "BaseBdev3", 00:25:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.368 "is_configured": false, 00:25:48.368 "data_offset": 0, 00:25:48.368 "data_size": 0 00:25:48.368 }, 00:25:48.368 { 00:25:48.368 "name": "BaseBdev4", 00:25:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.368 "is_configured": false, 00:25:48.368 "data_offset": 0, 00:25:48.368 "data_size": 0 00:25:48.368 } 00:25:48.368 ] 00:25:48.368 }' 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:48.368 07:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.627 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:48.885 [2024-05-16 07:38:42.331495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:48.885 BaseBdev3 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:48.885 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.143 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:49.402 [ 00:25:49.402 { 00:25:49.402 "name": "BaseBdev3", 00:25:49.402 "aliases": [ 00:25:49.402 "5190ff33-1357-11ef-8e8f-9dd684e56d79" 00:25:49.402 ], 00:25:49.402 "product_name": "Malloc disk", 00:25:49.402 "block_size": 512, 00:25:49.402 "num_blocks": 65536, 00:25:49.402 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:49.402 "assigned_rate_limits": { 00:25:49.402 "rw_ios_per_sec": 0, 00:25:49.402 "rw_mbytes_per_sec": 0, 00:25:49.402 "r_mbytes_per_sec": 0, 00:25:49.402 "w_mbytes_per_sec": 0 00:25:49.402 }, 00:25:49.402 "claimed": true, 00:25:49.402 "claim_type": "exclusive_write", 00:25:49.402 "zoned": false, 00:25:49.402 "supported_io_types": { 00:25:49.402 "read": true, 00:25:49.402 "write": true, 00:25:49.402 "unmap": true, 00:25:49.402 "write_zeroes": true, 00:25:49.402 "flush": true, 00:25:49.402 "reset": true, 00:25:49.402 "compare": false, 00:25:49.402 "compare_and_write": false, 00:25:49.402 "abort": true, 00:25:49.402 "nvme_admin": false, 00:25:49.402 "nvme_io": false 00:25:49.402 }, 00:25:49.402 "memory_domains": [ 00:25:49.402 { 00:25:49.402 "dma_device_id": "system", 00:25:49.402 "dma_device_type": 1 00:25:49.402 }, 00:25:49.402 { 00:25:49.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.402 "dma_device_type": 2 00:25:49.402 } 00:25:49.402 ], 00:25:49.402 "driver_specific": {} 00:25:49.402 } 00:25:49.402 ] 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:49.402 
07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.402 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.660 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:49.660 "name": "Existed_Raid", 00:25:49.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.660 "strip_size_kb": 0, 00:25:49.660 "state": "configuring", 00:25:49.660 "raid_level": "raid1", 00:25:49.660 "superblock": false, 00:25:49.660 "num_base_bdevs": 4, 00:25:49.660 "num_base_bdevs_discovered": 3, 00:25:49.660 "num_base_bdevs_operational": 4, 00:25:49.660 "base_bdevs_list": [ 00:25:49.660 { 00:25:49.660 "name": "BaseBdev1", 00:25:49.660 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:49.660 "is_configured": true, 00:25:49.660 "data_offset": 0, 00:25:49.660 "data_size": 65536 00:25:49.660 }, 00:25:49.660 { 00:25:49.660 "name": "BaseBdev2", 00:25:49.660 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:49.660 "is_configured": true, 00:25:49.660 "data_offset": 0, 00:25:49.660 "data_size": 65536 00:25:49.660 }, 00:25:49.660 { 00:25:49.660 "name": "BaseBdev3", 00:25:49.660 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:49.660 "is_configured": true, 00:25:49.660 "data_offset": 0, 00:25:49.660 "data_size": 65536 00:25:49.660 }, 00:25:49.660 { 00:25:49.660 "name": "BaseBdev4", 00:25:49.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.660 "is_configured": false, 00:25:49.660 "data_offset": 0, 00:25:49.660 "data_size": 0 00:25:49.660 } 00:25:49.660 ] 00:25:49.660 }' 00:25:49.660 07:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:49.660 07:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.919 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:50.178 [2024-05-16 07:38:43.515483] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:50.178 [2024-05-16 07:38:43.515507] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b3b0a00 00:25:50.178 [2024-05-16 07:38:43.515510] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:50.178 [2024-05-16 07:38:43.515551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b413ec0 00:25:50.178 [2024-05-16 07:38:43.515629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b3b0a00 00:25:50.178 [2024-05-16 07:38:43.515633] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b3b0a00 00:25:50.178 [2024-05-16 07:38:43.515658] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.178 BaseBdev4 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:50.178 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.438 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:50.438 [ 00:25:50.438 { 00:25:50.438 "name": "BaseBdev4", 00:25:50.438 "aliases": [ 00:25:50.438 "5245a89d-1357-11ef-8e8f-9dd684e56d79" 00:25:50.438 ], 00:25:50.438 "product_name": "Malloc disk", 00:25:50.438 "block_size": 512, 00:25:50.438 "num_blocks": 65536, 00:25:50.438 "uuid": "5245a89d-1357-11ef-8e8f-9dd684e56d79", 00:25:50.438 "assigned_rate_limits": { 00:25:50.438 "rw_ios_per_sec": 0, 00:25:50.438 "rw_mbytes_per_sec": 0, 00:25:50.438 "r_mbytes_per_sec": 0, 00:25:50.438 "w_mbytes_per_sec": 0 00:25:50.438 }, 00:25:50.438 "claimed": true, 00:25:50.438 "claim_type": "exclusive_write", 00:25:50.438 "zoned": false, 00:25:50.438 "supported_io_types": { 00:25:50.438 "read": true, 00:25:50.438 "write": true, 00:25:50.438 "unmap": true, 00:25:50.438 "write_zeroes": true, 00:25:50.438 "flush": true, 00:25:50.438 "reset": true, 00:25:50.438 "compare": false, 00:25:50.438 "compare_and_write": false, 00:25:50.438 "abort": true, 00:25:50.438 "nvme_admin": false, 00:25:50.438 "nvme_io": false 00:25:50.438 }, 00:25:50.438 "memory_domains": [ 00:25:50.438 { 00:25:50.438 "dma_device_id": "system", 00:25:50.438 "dma_device_type": 1 00:25:50.438 }, 00:25:50.438 { 00:25:50.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.438 "dma_device_type": 2 00:25:50.438 } 00:25:50.438 ], 00:25:50.438 "driver_specific": {} 00:25:50.438 } 00:25:50.438 ] 00:25:50.438 07:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:50.438 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:25:50.438 07:38:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:25:50.438 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:50.438 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.439 07:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.698 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.698 "name": "Existed_Raid", 00:25:50.698 "uuid": "5245ad0a-1357-11ef-8e8f-9dd684e56d79", 00:25:50.698 "strip_size_kb": 0, 00:25:50.698 "state": "online", 00:25:50.698 "raid_level": "raid1", 00:25:50.698 "superblock": false, 00:25:50.698 "num_base_bdevs": 4, 00:25:50.698 "num_base_bdevs_discovered": 4, 00:25:50.698 "num_base_bdevs_operational": 4, 00:25:50.698 "base_bdevs_list": [ 00:25:50.698 { 00:25:50.698 "name": "BaseBdev1", 00:25:50.698 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:50.698 "is_configured": true, 00:25:50.698 "data_offset": 0, 00:25:50.698 "data_size": 65536 00:25:50.698 }, 00:25:50.698 { 00:25:50.698 "name": "BaseBdev2", 00:25:50.698 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:50.698 "is_configured": true, 00:25:50.698 "data_offset": 0, 00:25:50.698 "data_size": 65536 00:25:50.698 }, 00:25:50.698 { 00:25:50.698 "name": "BaseBdev3", 00:25:50.698 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:50.698 "is_configured": true, 00:25:50.698 "data_offset": 0, 00:25:50.698 "data_size": 65536 00:25:50.698 }, 00:25:50.698 { 00:25:50.698 "name": "BaseBdev4", 00:25:50.698 "uuid": "5245a89d-1357-11ef-8e8f-9dd684e56d79", 00:25:50.698 "is_configured": true, 00:25:50.698 "data_offset": 0, 00:25:50.698 "data_size": 65536 00:25:50.698 } 00:25:50.698 ] 00:25:50.698 }' 00:25:50.698 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.698 07:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:25:51.284 
07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:25:51.284 [2024-05-16 07:38:44.723448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:51.284 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:25:51.284 "name": "Existed_Raid", 00:25:51.284 "aliases": [ 00:25:51.284 "5245ad0a-1357-11ef-8e8f-9dd684e56d79" 00:25:51.284 ], 00:25:51.284 "product_name": "Raid Volume", 00:25:51.284 "block_size": 512, 00:25:51.284 "num_blocks": 65536, 00:25:51.284 "uuid": "5245ad0a-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "assigned_rate_limits": { 00:25:51.284 "rw_ios_per_sec": 0, 00:25:51.284 "rw_mbytes_per_sec": 0, 00:25:51.284 "r_mbytes_per_sec": 0, 00:25:51.284 "w_mbytes_per_sec": 0 00:25:51.284 }, 00:25:51.284 "claimed": false, 00:25:51.284 "zoned": false, 00:25:51.284 "supported_io_types": { 00:25:51.284 "read": true, 00:25:51.284 "write": true, 00:25:51.284 "unmap": false, 00:25:51.284 "write_zeroes": true, 00:25:51.284 "flush": false, 00:25:51.284 "reset": true, 00:25:51.284 "compare": false, 00:25:51.284 "compare_and_write": false, 00:25:51.284 "abort": false, 00:25:51.284 "nvme_admin": false, 00:25:51.284 "nvme_io": false 00:25:51.284 }, 00:25:51.284 "memory_domains": [ 00:25:51.284 { 00:25:51.284 "dma_device_id": "system", 00:25:51.284 "dma_device_type": 1 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.284 "dma_device_type": 2 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "system", 00:25:51.284 "dma_device_type": 1 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.284 "dma_device_type": 2 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "system", 00:25:51.284 "dma_device_type": 1 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.284 "dma_device_type": 2 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "system", 00:25:51.284 "dma_device_type": 1 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.284 "dma_device_type": 2 00:25:51.284 } 00:25:51.284 ], 00:25:51.284 "driver_specific": { 00:25:51.284 "raid": { 00:25:51.284 "uuid": "5245ad0a-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "strip_size_kb": 0, 00:25:51.284 "state": "online", 00:25:51.284 "raid_level": "raid1", 00:25:51.284 "superblock": false, 00:25:51.284 "num_base_bdevs": 4, 00:25:51.284 "num_base_bdevs_discovered": 4, 00:25:51.284 "num_base_bdevs_operational": 4, 00:25:51.284 "base_bdevs_list": [ 00:25:51.284 { 00:25:51.284 "name": "BaseBdev1", 00:25:51.284 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "is_configured": true, 00:25:51.284 "data_offset": 0, 00:25:51.284 "data_size": 65536 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "name": "BaseBdev2", 00:25:51.284 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "is_configured": true, 00:25:51.284 "data_offset": 0, 00:25:51.284 
"data_size": 65536 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "name": "BaseBdev3", 00:25:51.284 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "is_configured": true, 00:25:51.284 "data_offset": 0, 00:25:51.284 "data_size": 65536 00:25:51.284 }, 00:25:51.284 { 00:25:51.284 "name": "BaseBdev4", 00:25:51.284 "uuid": "5245a89d-1357-11ef-8e8f-9dd684e56d79", 00:25:51.284 "is_configured": true, 00:25:51.284 "data_offset": 0, 00:25:51.284 "data_size": 65536 00:25:51.284 } 00:25:51.284 ] 00:25:51.284 } 00:25:51.284 } 00:25:51.284 }' 00:25:51.285 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:51.285 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:25:51.285 BaseBdev2 00:25:51.285 BaseBdev3 00:25:51.285 BaseBdev4' 00:25:51.285 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:51.285 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:51.285 07:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:51.543 "name": "BaseBdev1", 00:25:51.543 "aliases": [ 00:25:51.543 "4fad9485-1357-11ef-8e8f-9dd684e56d79" 00:25:51.543 ], 00:25:51.543 "product_name": "Malloc disk", 00:25:51.543 "block_size": 512, 00:25:51.543 "num_blocks": 65536, 00:25:51.543 "uuid": "4fad9485-1357-11ef-8e8f-9dd684e56d79", 00:25:51.543 "assigned_rate_limits": { 00:25:51.543 "rw_ios_per_sec": 0, 00:25:51.543 "rw_mbytes_per_sec": 0, 00:25:51.543 "r_mbytes_per_sec": 0, 00:25:51.543 "w_mbytes_per_sec": 0 00:25:51.543 }, 00:25:51.543 "claimed": true, 00:25:51.543 "claim_type": "exclusive_write", 00:25:51.543 "zoned": false, 00:25:51.543 "supported_io_types": { 00:25:51.543 "read": true, 00:25:51.543 "write": true, 00:25:51.543 "unmap": true, 00:25:51.543 "write_zeroes": true, 00:25:51.543 "flush": true, 00:25:51.543 "reset": true, 00:25:51.543 "compare": false, 00:25:51.543 "compare_and_write": false, 00:25:51.543 "abort": true, 00:25:51.543 "nvme_admin": false, 00:25:51.543 "nvme_io": false 00:25:51.543 }, 00:25:51.543 "memory_domains": [ 00:25:51.543 { 00:25:51.543 "dma_device_id": "system", 00:25:51.543 "dma_device_type": 1 00:25:51.543 }, 00:25:51.543 { 00:25:51.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.543 "dma_device_type": 2 00:25:51.543 } 00:25:51.543 ], 00:25:51.543 "driver_specific": {} 00:25:51.543 }' 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 
00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:51.543 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:51.801 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:51.801 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:51.802 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:51.802 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:52.203 "name": "BaseBdev2", 00:25:52.203 "aliases": [ 00:25:52.203 "50f076dc-1357-11ef-8e8f-9dd684e56d79" 00:25:52.203 ], 00:25:52.203 "product_name": "Malloc disk", 00:25:52.203 "block_size": 512, 00:25:52.203 "num_blocks": 65536, 00:25:52.203 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:52.203 "assigned_rate_limits": { 00:25:52.203 "rw_ios_per_sec": 0, 00:25:52.203 "rw_mbytes_per_sec": 0, 00:25:52.203 "r_mbytes_per_sec": 0, 00:25:52.203 "w_mbytes_per_sec": 0 00:25:52.203 }, 00:25:52.203 "claimed": true, 00:25:52.203 "claim_type": "exclusive_write", 00:25:52.203 "zoned": false, 00:25:52.203 "supported_io_types": { 00:25:52.203 "read": true, 00:25:52.203 "write": true, 00:25:52.203 "unmap": true, 00:25:52.203 "write_zeroes": true, 00:25:52.203 "flush": true, 00:25:52.203 "reset": true, 00:25:52.203 "compare": false, 00:25:52.203 "compare_and_write": false, 00:25:52.203 "abort": true, 00:25:52.203 "nvme_admin": false, 00:25:52.203 "nvme_io": false 00:25:52.203 }, 00:25:52.203 "memory_domains": [ 00:25:52.203 { 00:25:52.203 "dma_device_id": "system", 00:25:52.203 "dma_device_type": 1 00:25:52.203 }, 00:25:52.203 { 00:25:52.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.203 "dma_device_type": 2 00:25:52.203 } 00:25:52.203 ], 00:25:52.203 "driver_specific": {} 00:25:52.203 }' 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# for name in $base_bdev_names 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:52.203 "name": "BaseBdev3", 00:25:52.203 "aliases": [ 00:25:52.203 "5190ff33-1357-11ef-8e8f-9dd684e56d79" 00:25:52.203 ], 00:25:52.203 "product_name": "Malloc disk", 00:25:52.203 "block_size": 512, 00:25:52.203 "num_blocks": 65536, 00:25:52.203 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:52.203 "assigned_rate_limits": { 00:25:52.203 "rw_ios_per_sec": 0, 00:25:52.203 "rw_mbytes_per_sec": 0, 00:25:52.203 "r_mbytes_per_sec": 0, 00:25:52.203 "w_mbytes_per_sec": 0 00:25:52.203 }, 00:25:52.203 "claimed": true, 00:25:52.203 "claim_type": "exclusive_write", 00:25:52.203 "zoned": false, 00:25:52.203 "supported_io_types": { 00:25:52.203 "read": true, 00:25:52.203 "write": true, 00:25:52.203 "unmap": true, 00:25:52.203 "write_zeroes": true, 00:25:52.203 "flush": true, 00:25:52.203 "reset": true, 00:25:52.203 "compare": false, 00:25:52.203 "compare_and_write": false, 00:25:52.203 "abort": true, 00:25:52.203 "nvme_admin": false, 00:25:52.203 "nvme_io": false 00:25:52.203 }, 00:25:52.203 "memory_domains": [ 00:25:52.203 { 00:25:52.203 "dma_device_id": "system", 00:25:52.203 "dma_device_type": 1 00:25:52.203 }, 00:25:52.203 { 00:25:52.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.203 "dma_device_type": 2 00:25:52.203 } 00:25:52.203 ], 00:25:52.203 "driver_specific": {} 00:25:52.203 }' 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:52.203 07:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:25:52.769 "name": "BaseBdev4", 00:25:52.769 "aliases": [ 00:25:52.769 
"5245a89d-1357-11ef-8e8f-9dd684e56d79" 00:25:52.769 ], 00:25:52.769 "product_name": "Malloc disk", 00:25:52.769 "block_size": 512, 00:25:52.769 "num_blocks": 65536, 00:25:52.769 "uuid": "5245a89d-1357-11ef-8e8f-9dd684e56d79", 00:25:52.769 "assigned_rate_limits": { 00:25:52.769 "rw_ios_per_sec": 0, 00:25:52.769 "rw_mbytes_per_sec": 0, 00:25:52.769 "r_mbytes_per_sec": 0, 00:25:52.769 "w_mbytes_per_sec": 0 00:25:52.769 }, 00:25:52.769 "claimed": true, 00:25:52.769 "claim_type": "exclusive_write", 00:25:52.769 "zoned": false, 00:25:52.769 "supported_io_types": { 00:25:52.769 "read": true, 00:25:52.769 "write": true, 00:25:52.769 "unmap": true, 00:25:52.769 "write_zeroes": true, 00:25:52.769 "flush": true, 00:25:52.769 "reset": true, 00:25:52.769 "compare": false, 00:25:52.769 "compare_and_write": false, 00:25:52.769 "abort": true, 00:25:52.769 "nvme_admin": false, 00:25:52.769 "nvme_io": false 00:25:52.769 }, 00:25:52.769 "memory_domains": [ 00:25:52.769 { 00:25:52.769 "dma_device_id": "system", 00:25:52.769 "dma_device_type": 1 00:25:52.769 }, 00:25:52.769 { 00:25:52.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.769 "dma_device_type": 2 00:25:52.769 } 00:25:52.769 ], 00:25:52.769 "driver_specific": {} 00:25:52.769 }' 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:52.769 [2024-05-16 07:38:46.279420] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # local expected_state 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 0 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.769 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.026 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.026 "name": "Existed_Raid", 00:25:53.026 "uuid": "5245ad0a-1357-11ef-8e8f-9dd684e56d79", 00:25:53.026 "strip_size_kb": 0, 00:25:53.026 "state": "online", 00:25:53.026 "raid_level": "raid1", 00:25:53.026 "superblock": false, 00:25:53.026 "num_base_bdevs": 4, 00:25:53.026 "num_base_bdevs_discovered": 3, 00:25:53.026 "num_base_bdevs_operational": 3, 00:25:53.026 "base_bdevs_list": [ 00:25:53.026 { 00:25:53.026 "name": null, 00:25:53.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.026 "is_configured": false, 00:25:53.026 "data_offset": 0, 00:25:53.026 "data_size": 65536 00:25:53.026 }, 00:25:53.026 { 00:25:53.026 "name": "BaseBdev2", 00:25:53.026 "uuid": "50f076dc-1357-11ef-8e8f-9dd684e56d79", 00:25:53.026 "is_configured": true, 00:25:53.026 "data_offset": 0, 00:25:53.026 "data_size": 65536 00:25:53.026 }, 00:25:53.026 { 00:25:53.026 "name": "BaseBdev3", 00:25:53.026 "uuid": "5190ff33-1357-11ef-8e8f-9dd684e56d79", 00:25:53.026 "is_configured": true, 00:25:53.026 "data_offset": 0, 00:25:53.026 "data_size": 65536 00:25:53.026 }, 00:25:53.026 { 00:25:53.026 "name": "BaseBdev4", 00:25:53.026 "uuid": "5245a89d-1357-11ef-8e8f-9dd684e56d79", 00:25:53.026 "is_configured": true, 00:25:53.026 "data_offset": 0, 00:25:53.026 "data_size": 65536 00:25:53.026 } 00:25:53.026 ] 00:25:53.026 }' 00:25:53.026 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.026 07:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.649 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:53.649 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:53.649 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:53.649 07:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.649 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:53.649 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:25:53.649 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:53.907 [2024-05-16 07:38:47.432155] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:53.907 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:53.907 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:53.907 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.907 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:54.472 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:54.472 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:54.472 07:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:54.472 [2024-05-16 07:38:47.984903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:54.472 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.472 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.472 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.472 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:25:54.730 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:25:54.730 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:54.730 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:54.987 [2024-05-16 07:38:48.461660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:54.987 [2024-05-16 07:38:48.461691] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.987 [2024-05-16 07:38:48.466504] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.987 [2024-05-16 07:38:48.466522] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.987 [2024-05-16 07:38:48.466527] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3b0a00 name Existed_Raid, state offline 00:25:54.987 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.987 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.988 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.988 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
raid_bdev= 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:55.247 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:55.505 BaseBdev2 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:55.505 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:55.506 07:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.763 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:56.021 [ 00:25:56.021 { 00:25:56.021 "name": "BaseBdev2", 00:25:56.021 "aliases": [ 00:25:56.021 "558397c8-1357-11ef-8e8f-9dd684e56d79" 00:25:56.021 ], 00:25:56.021 "product_name": "Malloc disk", 00:25:56.021 "block_size": 512, 00:25:56.021 "num_blocks": 65536, 00:25:56.021 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:25:56.021 "assigned_rate_limits": { 00:25:56.021 "rw_ios_per_sec": 0, 00:25:56.021 "rw_mbytes_per_sec": 0, 00:25:56.021 "r_mbytes_per_sec": 0, 00:25:56.021 "w_mbytes_per_sec": 0 00:25:56.021 }, 00:25:56.021 "claimed": false, 00:25:56.021 "zoned": false, 00:25:56.021 "supported_io_types": { 00:25:56.021 "read": true, 00:25:56.021 "write": true, 00:25:56.021 "unmap": true, 00:25:56.021 "write_zeroes": true, 00:25:56.021 "flush": true, 00:25:56.021 "reset": true, 00:25:56.021 "compare": false, 00:25:56.021 "compare_and_write": false, 00:25:56.021 "abort": true, 00:25:56.021 "nvme_admin": false, 00:25:56.021 "nvme_io": false 00:25:56.021 }, 00:25:56.021 "memory_domains": [ 00:25:56.021 { 00:25:56.021 "dma_device_id": "system", 00:25:56.021 "dma_device_type": 1 00:25:56.021 }, 00:25:56.021 { 00:25:56.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.021 "dma_device_type": 2 00:25:56.021 } 00:25:56.021 ], 00:25:56.021 "driver_specific": {} 00:25:56.021 } 00:25:56.021 ] 00:25:56.021 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:56.021 07:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:56.021 07:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:56.021 07:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:25:56.279 BaseBdev3 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:56.279 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:56.536 07:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:56.794 [ 00:25:56.794 { 00:25:56.794 "name": "BaseBdev3", 00:25:56.794 "aliases": [ 00:25:56.794 "55f6f309-1357-11ef-8e8f-9dd684e56d79" 00:25:56.794 ], 00:25:56.794 "product_name": "Malloc disk", 00:25:56.794 "block_size": 512, 00:25:56.794 "num_blocks": 65536, 00:25:56.794 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:25:56.794 "assigned_rate_limits": { 00:25:56.794 "rw_ios_per_sec": 0, 00:25:56.794 "rw_mbytes_per_sec": 0, 00:25:56.794 "r_mbytes_per_sec": 0, 00:25:56.794 "w_mbytes_per_sec": 0 00:25:56.794 }, 00:25:56.794 "claimed": false, 00:25:56.794 "zoned": false, 00:25:56.794 "supported_io_types": { 00:25:56.794 "read": true, 00:25:56.794 "write": true, 00:25:56.794 "unmap": true, 00:25:56.794 "write_zeroes": true, 00:25:56.794 "flush": true, 00:25:56.794 "reset": true, 00:25:56.794 "compare": false, 00:25:56.794 "compare_and_write": false, 00:25:56.794 "abort": true, 00:25:56.794 "nvme_admin": false, 00:25:56.794 "nvme_io": false 00:25:56.794 }, 00:25:56.794 "memory_domains": [ 00:25:56.794 { 00:25:56.794 "dma_device_id": "system", 00:25:56.794 "dma_device_type": 1 00:25:56.794 }, 00:25:56.794 { 00:25:56.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.794 "dma_device_type": 2 00:25:56.794 } 00:25:56.794 ], 00:25:56.795 "driver_specific": {} 00:25:56.795 } 00:25:56.795 ] 00:25:56.795 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:56.795 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:56.795 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:56.795 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:57.052 BaseBdev4 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:57.052 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.310 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:57.310 [ 00:25:57.310 { 00:25:57.310 "name": "BaseBdev4", 00:25:57.310 "aliases": [ 00:25:57.310 "566433fc-1357-11ef-8e8f-9dd684e56d79" 00:25:57.310 ], 00:25:57.310 "product_name": "Malloc disk", 00:25:57.310 "block_size": 512, 00:25:57.310 "num_blocks": 65536, 00:25:57.310 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:25:57.310 "assigned_rate_limits": { 00:25:57.310 "rw_ios_per_sec": 0, 00:25:57.310 "rw_mbytes_per_sec": 0, 00:25:57.310 "r_mbytes_per_sec": 0, 00:25:57.310 "w_mbytes_per_sec": 0 00:25:57.310 }, 00:25:57.310 "claimed": false, 00:25:57.310 "zoned": false, 00:25:57.310 "supported_io_types": { 00:25:57.310 "read": true, 00:25:57.310 "write": true, 00:25:57.310 "unmap": true, 00:25:57.310 "write_zeroes": true, 00:25:57.310 "flush": true, 00:25:57.310 "reset": true, 00:25:57.310 "compare": false, 00:25:57.310 "compare_and_write": false, 00:25:57.310 "abort": true, 00:25:57.310 "nvme_admin": false, 00:25:57.310 "nvme_io": false 00:25:57.310 }, 00:25:57.310 "memory_domains": [ 00:25:57.310 { 00:25:57.310 "dma_device_id": "system", 00:25:57.310 "dma_device_type": 1 00:25:57.310 }, 00:25:57.310 { 00:25:57.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.310 "dma_device_type": 2 00:25:57.310 } 00:25:57.310 ], 00:25:57.310 "driver_specific": {} 00:25:57.310 } 00:25:57.310 ] 00:25:57.310 07:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:57.310 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:25:57.310 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:25:57.310 07:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:57.568 [2024-05-16 07:38:51.058462] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:57.568 [2024-05-16 07:38:51.058530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:57.568 [2024-05-16 07:38:51.058537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.568 [2024-05-16 07:38:51.058926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:57.568 [2024-05-16 07:38:51.058936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # 
local strip_size=0 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.568 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.826 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.826 "name": "Existed_Raid", 00:25:57.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.826 "strip_size_kb": 0, 00:25:57.826 "state": "configuring", 00:25:57.826 "raid_level": "raid1", 00:25:57.826 "superblock": false, 00:25:57.826 "num_base_bdevs": 4, 00:25:57.826 "num_base_bdevs_discovered": 3, 00:25:57.826 "num_base_bdevs_operational": 4, 00:25:57.826 "base_bdevs_list": [ 00:25:57.826 { 00:25:57.826 "name": "BaseBdev1", 00:25:57.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.826 "is_configured": false, 00:25:57.826 "data_offset": 0, 00:25:57.826 "data_size": 0 00:25:57.826 }, 00:25:57.826 { 00:25:57.826 "name": "BaseBdev2", 00:25:57.826 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:25:57.826 "is_configured": true, 00:25:57.826 "data_offset": 0, 00:25:57.826 "data_size": 65536 00:25:57.826 }, 00:25:57.826 { 00:25:57.826 "name": "BaseBdev3", 00:25:57.826 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:25:57.826 "is_configured": true, 00:25:57.826 "data_offset": 0, 00:25:57.826 "data_size": 65536 00:25:57.826 }, 00:25:57.826 { 00:25:57.826 "name": "BaseBdev4", 00:25:57.826 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:25:57.826 "is_configured": true, 00:25:57.826 "data_offset": 0, 00:25:57.826 "data_size": 65536 00:25:57.826 } 00:25:57.826 ] 00:25:57.826 }' 00:25:57.826 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.826 07:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.083 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:58.341 [2024-05-16 07:38:51.818446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
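At this point the test has torn the array down and is rebuilding it in the configuring state: three fresh 32 MiB malloc disks with a 512-byte block size (hence the 65536 blocks reported above) are created, bdev_raid_create is issued with four base bdev names while BaseBdev1 intentionally does not exist yet, and BaseBdev2 is then detached again with bdev_raid_remove_base_bdev. A rough manual equivalent, assuming the same socket and script paths as in the trace, would be:

  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expected: configuring

The 32 and 512 arguments are the malloc size in MiB and the block size in bytes, exactly as the script passes them; the trailing jq query is illustrative and mirrors the verification that continues in the log below.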
00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.341 07:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.599 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.599 "name": "Existed_Raid", 00:25:58.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.599 "strip_size_kb": 0, 00:25:58.599 "state": "configuring", 00:25:58.599 "raid_level": "raid1", 00:25:58.599 "superblock": false, 00:25:58.599 "num_base_bdevs": 4, 00:25:58.599 "num_base_bdevs_discovered": 2, 00:25:58.599 "num_base_bdevs_operational": 4, 00:25:58.599 "base_bdevs_list": [ 00:25:58.599 { 00:25:58.599 "name": "BaseBdev1", 00:25:58.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.599 "is_configured": false, 00:25:58.599 "data_offset": 0, 00:25:58.599 "data_size": 0 00:25:58.599 }, 00:25:58.599 { 00:25:58.599 "name": null, 00:25:58.599 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:25:58.599 "is_configured": false, 00:25:58.599 "data_offset": 0, 00:25:58.599 "data_size": 65536 00:25:58.599 }, 00:25:58.599 { 00:25:58.599 "name": "BaseBdev3", 00:25:58.599 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:25:58.599 "is_configured": true, 00:25:58.599 "data_offset": 0, 00:25:58.599 "data_size": 65536 00:25:58.599 }, 00:25:58.599 { 00:25:58.599 "name": "BaseBdev4", 00:25:58.599 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:25:58.599 "is_configured": true, 00:25:58.599 "data_offset": 0, 00:25:58.599 "data_size": 65536 00:25:58.599 } 00:25:58.599 ] 00:25:58.599 }' 00:25:58.599 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.599 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.165 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.165 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:59.165 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:25:59.165 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:59.428 [2024-05-16 07:38:52.970574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:59.428 BaseBdev1 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:59.686 07:38:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:59.686 07:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:59.944 07:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:59.944 [ 00:25:59.944 { 00:25:59.944 "name": "BaseBdev1", 00:25:59.944 "aliases": [ 00:25:59.944 "57e862ef-1357-11ef-8e8f-9dd684e56d79" 00:25:59.944 ], 00:25:59.944 "product_name": "Malloc disk", 00:25:59.944 "block_size": 512, 00:25:59.944 "num_blocks": 65536, 00:25:59.944 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:25:59.944 "assigned_rate_limits": { 00:25:59.944 "rw_ios_per_sec": 0, 00:25:59.944 "rw_mbytes_per_sec": 0, 00:25:59.944 "r_mbytes_per_sec": 0, 00:25:59.944 "w_mbytes_per_sec": 0 00:25:59.944 }, 00:25:59.944 "claimed": true, 00:25:59.944 "claim_type": "exclusive_write", 00:25:59.944 "zoned": false, 00:25:59.944 "supported_io_types": { 00:25:59.944 "read": true, 00:25:59.944 "write": true, 00:25:59.944 "unmap": true, 00:25:59.944 "write_zeroes": true, 00:25:59.944 "flush": true, 00:25:59.944 "reset": true, 00:25:59.944 "compare": false, 00:25:59.944 "compare_and_write": false, 00:25:59.944 "abort": true, 00:25:59.944 "nvme_admin": false, 00:25:59.944 "nvme_io": false 00:25:59.944 }, 00:25:59.944 "memory_domains": [ 00:25:59.944 { 00:25:59.944 "dma_device_id": "system", 00:25:59.944 "dma_device_type": 1 00:25:59.944 }, 00:25:59.944 { 00:25:59.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.944 "dma_device_type": 2 00:25:59.944 } 00:25:59.944 ], 00:25:59.944 "driver_specific": {} 00:25:59.944 } 00:25:59.944 ] 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.203 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.203 07:38:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.460 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.460 "name": "Existed_Raid", 00:26:00.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.460 "strip_size_kb": 0, 00:26:00.460 "state": "configuring", 00:26:00.460 "raid_level": "raid1", 00:26:00.460 "superblock": false, 00:26:00.460 "num_base_bdevs": 4, 00:26:00.460 "num_base_bdevs_discovered": 3, 00:26:00.460 "num_base_bdevs_operational": 4, 00:26:00.460 "base_bdevs_list": [ 00:26:00.460 { 00:26:00.460 "name": "BaseBdev1", 00:26:00.460 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:00.460 "is_configured": true, 00:26:00.460 "data_offset": 0, 00:26:00.460 "data_size": 65536 00:26:00.460 }, 00:26:00.460 { 00:26:00.460 "name": null, 00:26:00.460 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:00.460 "is_configured": false, 00:26:00.460 "data_offset": 0, 00:26:00.460 "data_size": 65536 00:26:00.460 }, 00:26:00.460 { 00:26:00.460 "name": "BaseBdev3", 00:26:00.460 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:00.460 "is_configured": true, 00:26:00.460 "data_offset": 0, 00:26:00.460 "data_size": 65536 00:26:00.460 }, 00:26:00.460 { 00:26:00.460 "name": "BaseBdev4", 00:26:00.460 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:00.460 "is_configured": true, 00:26:00.460 "data_offset": 0, 00:26:00.460 "data_size": 65536 00:26:00.460 } 00:26:00.460 ] 00:26:00.460 }' 00:26:00.460 07:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.460 07:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.717 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.717 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:01.282 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:01.282 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:01.282 [2024-05-16 07:38:54.823323] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:01.541 07:38:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.541 07:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.799 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:01.799 "name": "Existed_Raid", 00:26:01.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.799 "strip_size_kb": 0, 00:26:01.799 "state": "configuring", 00:26:01.799 "raid_level": "raid1", 00:26:01.799 "superblock": false, 00:26:01.799 "num_base_bdevs": 4, 00:26:01.799 "num_base_bdevs_discovered": 2, 00:26:01.799 "num_base_bdevs_operational": 4, 00:26:01.799 "base_bdevs_list": [ 00:26:01.799 { 00:26:01.799 "name": "BaseBdev1", 00:26:01.799 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:01.799 "is_configured": true, 00:26:01.799 "data_offset": 0, 00:26:01.799 "data_size": 65536 00:26:01.799 }, 00:26:01.799 { 00:26:01.799 "name": null, 00:26:01.799 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:01.799 "is_configured": false, 00:26:01.799 "data_offset": 0, 00:26:01.799 "data_size": 65536 00:26:01.799 }, 00:26:01.799 { 00:26:01.799 "name": null, 00:26:01.799 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:01.799 "is_configured": false, 00:26:01.799 "data_offset": 0, 00:26:01.799 "data_size": 65536 00:26:01.799 }, 00:26:01.799 { 00:26:01.799 "name": "BaseBdev4", 00:26:01.799 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:01.799 "is_configured": true, 00:26:01.799 "data_offset": 0, 00:26:01.799 "data_size": 65536 00:26:01.799 } 00:26:01.799 ] 00:26:01.799 }' 00:26:01.799 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:01.799 07:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.057 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.057 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:02.315 [2024-05-16 07:38:55.807311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:02.315 07:38:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.315 07:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.572 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:02.572 "name": "Existed_Raid", 00:26:02.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.572 "strip_size_kb": 0, 00:26:02.572 "state": "configuring", 00:26:02.572 "raid_level": "raid1", 00:26:02.572 "superblock": false, 00:26:02.572 "num_base_bdevs": 4, 00:26:02.572 "num_base_bdevs_discovered": 3, 00:26:02.572 "num_base_bdevs_operational": 4, 00:26:02.572 "base_bdevs_list": [ 00:26:02.572 { 00:26:02.572 "name": "BaseBdev1", 00:26:02.572 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:02.572 "is_configured": true, 00:26:02.572 "data_offset": 0, 00:26:02.572 "data_size": 65536 00:26:02.572 }, 00:26:02.572 { 00:26:02.572 "name": null, 00:26:02.572 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:02.572 "is_configured": false, 00:26:02.572 "data_offset": 0, 00:26:02.572 "data_size": 65536 00:26:02.572 }, 00:26:02.572 { 00:26:02.572 "name": "BaseBdev3", 00:26:02.572 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:02.572 "is_configured": true, 00:26:02.572 "data_offset": 0, 00:26:02.572 "data_size": 65536 00:26:02.572 }, 00:26:02.572 { 00:26:02.572 "name": "BaseBdev4", 00:26:02.572 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:02.572 "is_configured": true, 00:26:02.572 "data_offset": 0, 00:26:02.572 "data_size": 65536 00:26:02.572 } 00:26:02.572 ] 00:26:02.572 }' 00:26:02.572 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:02.572 07:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.144 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.144 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:03.402 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:26:03.402 07:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:03.661 [2024-05-16 07:38:56.991326] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid1 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.661 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.919 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.919 "name": "Existed_Raid", 00:26:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.919 "strip_size_kb": 0, 00:26:03.919 "state": "configuring", 00:26:03.919 "raid_level": "raid1", 00:26:03.919 "superblock": false, 00:26:03.919 "num_base_bdevs": 4, 00:26:03.919 "num_base_bdevs_discovered": 2, 00:26:03.919 "num_base_bdevs_operational": 4, 00:26:03.919 "base_bdevs_list": [ 00:26:03.919 { 00:26:03.919 "name": null, 00:26:03.919 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:03.919 "is_configured": false, 00:26:03.919 "data_offset": 0, 00:26:03.919 "data_size": 65536 00:26:03.919 }, 00:26:03.919 { 00:26:03.919 "name": null, 00:26:03.919 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:03.919 "is_configured": false, 00:26:03.919 "data_offset": 0, 00:26:03.919 "data_size": 65536 00:26:03.919 }, 00:26:03.919 { 00:26:03.919 "name": "BaseBdev3", 00:26:03.919 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:03.919 "is_configured": true, 00:26:03.919 "data_offset": 0, 00:26:03.919 "data_size": 65536 00:26:03.919 }, 00:26:03.919 { 00:26:03.919 "name": "BaseBdev4", 00:26:03.919 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:03.919 "is_configured": true, 00:26:03.919 "data_offset": 0, 00:26:03.919 "data_size": 65536 00:26:03.919 } 00:26:03.919 ] 00:26:03.919 }' 00:26:03.919 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.919 07:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.178 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.178 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:04.446 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:26:04.446 07:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:04.704 [2024-05-16 07:38:58.044061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:04.704 07:38:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.704 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.962 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.963 "name": "Existed_Raid", 00:26:04.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.963 "strip_size_kb": 0, 00:26:04.963 "state": "configuring", 00:26:04.963 "raid_level": "raid1", 00:26:04.963 "superblock": false, 00:26:04.963 "num_base_bdevs": 4, 00:26:04.963 "num_base_bdevs_discovered": 3, 00:26:04.963 "num_base_bdevs_operational": 4, 00:26:04.963 "base_bdevs_list": [ 00:26:04.963 { 00:26:04.963 "name": null, 00:26:04.963 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:04.963 "is_configured": false, 00:26:04.963 "data_offset": 0, 00:26:04.963 "data_size": 65536 00:26:04.963 }, 00:26:04.963 { 00:26:04.963 "name": "BaseBdev2", 00:26:04.963 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:04.963 "is_configured": true, 00:26:04.963 "data_offset": 0, 00:26:04.963 "data_size": 65536 00:26:04.963 }, 00:26:04.963 { 00:26:04.963 "name": "BaseBdev3", 00:26:04.963 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:04.963 "is_configured": true, 00:26:04.963 "data_offset": 0, 00:26:04.963 "data_size": 65536 00:26:04.963 }, 00:26:04.963 { 00:26:04.963 "name": "BaseBdev4", 00:26:04.963 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:04.963 "is_configured": true, 00:26:04.963 "data_offset": 0, 00:26:04.963 "data_size": 65536 00:26:04.963 } 00:26:04.963 ] 00:26:04.963 }' 00:26:04.963 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.963 07:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.222 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.222 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:05.481 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:26:05.481 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:05.481 07:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:05.739 07:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 57e862ef-1357-11ef-8e8f-9dd684e56d79 00:26:05.998 [2024-05-16 07:38:59.448162] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:05.998 [2024-05-16 07:38:59.448189] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b3b0f00 00:26:05.998 [2024-05-16 07:38:59.448194] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:05.998 [2024-05-16 07:38:59.448216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b413e20 00:26:05.998 [2024-05-16 07:38:59.448272] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b3b0f00 00:26:05.998 [2024-05-16 07:38:59.448276] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b3b0f00 00:26:05.998 [2024-05-16 07:38:59.448305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.998 NewBaseBdev 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:05.998 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:06.258 07:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:06.520 [ 00:26:06.520 { 00:26:06.520 "name": "NewBaseBdev", 00:26:06.520 "aliases": [ 00:26:06.520 "57e862ef-1357-11ef-8e8f-9dd684e56d79" 00:26:06.520 ], 00:26:06.520 "product_name": "Malloc disk", 00:26:06.520 "block_size": 512, 00:26:06.520 "num_blocks": 65536, 00:26:06.520 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:06.520 "assigned_rate_limits": { 00:26:06.520 "rw_ios_per_sec": 0, 00:26:06.520 "rw_mbytes_per_sec": 0, 00:26:06.520 "r_mbytes_per_sec": 0, 00:26:06.520 "w_mbytes_per_sec": 0 00:26:06.520 }, 00:26:06.520 "claimed": true, 00:26:06.520 "claim_type": "exclusive_write", 00:26:06.520 "zoned": false, 00:26:06.520 "supported_io_types": { 00:26:06.520 "read": true, 00:26:06.520 "write": true, 00:26:06.520 "unmap": true, 00:26:06.520 "write_zeroes": true, 00:26:06.520 "flush": true, 00:26:06.520 "reset": true, 00:26:06.520 "compare": false, 00:26:06.520 "compare_and_write": false, 00:26:06.520 "abort": true, 00:26:06.520 "nvme_admin": false, 00:26:06.520 "nvme_io": false 00:26:06.520 }, 00:26:06.520 "memory_domains": [ 00:26:06.520 { 00:26:06.520 "dma_device_id": "system", 00:26:06.520 "dma_device_type": 1 00:26:06.520 }, 00:26:06.520 { 00:26:06.520 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.520 "dma_device_type": 2 00:26:06.520 } 00:26:06.520 ], 00:26:06.520 "driver_specific": {} 00:26:06.520 } 00:26:06.520 ] 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.520 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.779 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:06.779 "name": "Existed_Raid", 00:26:06.779 "uuid": "5bc4cf51-1357-11ef-8e8f-9dd684e56d79", 00:26:06.779 "strip_size_kb": 0, 00:26:06.779 "state": "online", 00:26:06.779 "raid_level": "raid1", 00:26:06.779 "superblock": false, 00:26:06.779 "num_base_bdevs": 4, 00:26:06.779 "num_base_bdevs_discovered": 4, 00:26:06.779 "num_base_bdevs_operational": 4, 00:26:06.779 "base_bdevs_list": [ 00:26:06.779 { 00:26:06.779 "name": "NewBaseBdev", 00:26:06.779 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:06.779 "is_configured": true, 00:26:06.779 "data_offset": 0, 00:26:06.779 "data_size": 65536 00:26:06.779 }, 00:26:06.779 { 00:26:06.779 "name": "BaseBdev2", 00:26:06.779 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:06.779 "is_configured": true, 00:26:06.779 "data_offset": 0, 00:26:06.779 "data_size": 65536 00:26:06.779 }, 00:26:06.779 { 00:26:06.779 "name": "BaseBdev3", 00:26:06.779 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:06.779 "is_configured": true, 00:26:06.779 "data_offset": 0, 00:26:06.779 "data_size": 65536 00:26:06.779 }, 00:26:06.779 { 00:26:06.779 "name": "BaseBdev4", 00:26:06.779 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:06.779 "is_configured": true, 00:26:06.779 "data_offset": 0, 00:26:06.779 "data_size": 65536 00:26:06.779 } 00:26:06.779 ] 00:26:06.779 }' 00:26:06.779 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:06.779 07:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:26:07.347 07:39:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:07.347 [2024-05-16 07:39:00.804086] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:07.347 "name": "Existed_Raid", 00:26:07.347 "aliases": [ 00:26:07.347 "5bc4cf51-1357-11ef-8e8f-9dd684e56d79" 00:26:07.347 ], 00:26:07.347 "product_name": "Raid Volume", 00:26:07.347 "block_size": 512, 00:26:07.347 "num_blocks": 65536, 00:26:07.347 "uuid": "5bc4cf51-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "assigned_rate_limits": { 00:26:07.347 "rw_ios_per_sec": 0, 00:26:07.347 "rw_mbytes_per_sec": 0, 00:26:07.347 "r_mbytes_per_sec": 0, 00:26:07.347 "w_mbytes_per_sec": 0 00:26:07.347 }, 00:26:07.347 "claimed": false, 00:26:07.347 "zoned": false, 00:26:07.347 "supported_io_types": { 00:26:07.347 "read": true, 00:26:07.347 "write": true, 00:26:07.347 "unmap": false, 00:26:07.347 "write_zeroes": true, 00:26:07.347 "flush": false, 00:26:07.347 "reset": true, 00:26:07.347 "compare": false, 00:26:07.347 "compare_and_write": false, 00:26:07.347 "abort": false, 00:26:07.347 "nvme_admin": false, 00:26:07.347 "nvme_io": false 00:26:07.347 }, 00:26:07.347 "memory_domains": [ 00:26:07.347 { 00:26:07.347 "dma_device_id": "system", 00:26:07.347 "dma_device_type": 1 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.347 "dma_device_type": 2 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "system", 00:26:07.347 "dma_device_type": 1 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.347 "dma_device_type": 2 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "system", 00:26:07.347 "dma_device_type": 1 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.347 "dma_device_type": 2 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "system", 00:26:07.347 "dma_device_type": 1 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.347 "dma_device_type": 2 00:26:07.347 } 00:26:07.347 ], 00:26:07.347 "driver_specific": { 00:26:07.347 "raid": { 00:26:07.347 "uuid": "5bc4cf51-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "strip_size_kb": 0, 00:26:07.347 "state": "online", 00:26:07.347 "raid_level": "raid1", 00:26:07.347 "superblock": false, 00:26:07.347 "num_base_bdevs": 4, 00:26:07.347 "num_base_bdevs_discovered": 4, 00:26:07.347 "num_base_bdevs_operational": 4, 00:26:07.347 "base_bdevs_list": [ 00:26:07.347 { 00:26:07.347 "name": "NewBaseBdev", 00:26:07.347 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "is_configured": true, 00:26:07.347 "data_offset": 0, 00:26:07.347 "data_size": 
65536 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "name": "BaseBdev2", 00:26:07.347 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "is_configured": true, 00:26:07.347 "data_offset": 0, 00:26:07.347 "data_size": 65536 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "name": "BaseBdev3", 00:26:07.347 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "is_configured": true, 00:26:07.347 "data_offset": 0, 00:26:07.347 "data_size": 65536 00:26:07.347 }, 00:26:07.347 { 00:26:07.347 "name": "BaseBdev4", 00:26:07.347 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:07.347 "is_configured": true, 00:26:07.347 "data_offset": 0, 00:26:07.347 "data_size": 65536 00:26:07.347 } 00:26:07.347 ] 00:26:07.347 } 00:26:07.347 } 00:26:07.347 }' 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:26:07.347 BaseBdev2 00:26:07.347 BaseBdev3 00:26:07.347 BaseBdev4' 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:07.347 07:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:07.605 "name": "NewBaseBdev", 00:26:07.605 "aliases": [ 00:26:07.605 "57e862ef-1357-11ef-8e8f-9dd684e56d79" 00:26:07.605 ], 00:26:07.605 "product_name": "Malloc disk", 00:26:07.605 "block_size": 512, 00:26:07.605 "num_blocks": 65536, 00:26:07.605 "uuid": "57e862ef-1357-11ef-8e8f-9dd684e56d79", 00:26:07.605 "assigned_rate_limits": { 00:26:07.605 "rw_ios_per_sec": 0, 00:26:07.605 "rw_mbytes_per_sec": 0, 00:26:07.605 "r_mbytes_per_sec": 0, 00:26:07.605 "w_mbytes_per_sec": 0 00:26:07.605 }, 00:26:07.605 "claimed": true, 00:26:07.605 "claim_type": "exclusive_write", 00:26:07.605 "zoned": false, 00:26:07.605 "supported_io_types": { 00:26:07.605 "read": true, 00:26:07.605 "write": true, 00:26:07.605 "unmap": true, 00:26:07.605 "write_zeroes": true, 00:26:07.605 "flush": true, 00:26:07.605 "reset": true, 00:26:07.605 "compare": false, 00:26:07.605 "compare_and_write": false, 00:26:07.605 "abort": true, 00:26:07.605 "nvme_admin": false, 00:26:07.605 "nvme_io": false 00:26:07.605 }, 00:26:07.605 "memory_domains": [ 00:26:07.605 { 00:26:07.605 "dma_device_id": "system", 00:26:07.605 "dma_device_type": 1 00:26:07.605 }, 00:26:07.605 { 00:26:07.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.605 "dma_device_type": 2 00:26:07.605 } 00:26:07.605 ], 00:26:07.605 "driver_specific": {} 00:26:07.605 }' 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
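For readers following the trace: the per-bdev property checks above (block_size, md_size, md_interleave, dif_type) reduce to querying bdev_get_bdevs over the test's dedicated RPC socket and comparing selected fields with jq. A minimal sketch of that pattern, assuming the same rpc.py path and socket shown in the trace; the expected values here are illustrative, not taken from a captured run:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # fetch the single bdev object for one base bdev of the raid volume
    info=$($rpc -s $sock bdev_get_bdevs -b NewBaseBdev | jq '.[]')
    [[ $(jq -r .block_size    <<< "$info") == 512  ]]   # matches the malloc bdev's block size
    [[ $(jq -r .md_size       <<< "$info") == null ]]   # malloc base bdevs carry no metadata
    [[ $(jq -r .md_interleave <<< "$info") == null ]]
    [[ $(jq -r .dif_type      <<< "$info") == null ]]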
00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:07.605 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:07.606 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:07.606 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:07.606 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:07.863 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:07.863 "name": "BaseBdev2", 00:26:07.863 "aliases": [ 00:26:07.863 "558397c8-1357-11ef-8e8f-9dd684e56d79" 00:26:07.863 ], 00:26:07.863 "product_name": "Malloc disk", 00:26:07.863 "block_size": 512, 00:26:07.863 "num_blocks": 65536, 00:26:07.863 "uuid": "558397c8-1357-11ef-8e8f-9dd684e56d79", 00:26:07.863 "assigned_rate_limits": { 00:26:07.863 "rw_ios_per_sec": 0, 00:26:07.863 "rw_mbytes_per_sec": 0, 00:26:07.863 "r_mbytes_per_sec": 0, 00:26:07.863 "w_mbytes_per_sec": 0 00:26:07.863 }, 00:26:07.863 "claimed": true, 00:26:07.863 "claim_type": "exclusive_write", 00:26:07.863 "zoned": false, 00:26:07.863 "supported_io_types": { 00:26:07.863 "read": true, 00:26:07.863 "write": true, 00:26:07.863 "unmap": true, 00:26:07.863 "write_zeroes": true, 00:26:07.864 "flush": true, 00:26:07.864 "reset": true, 00:26:07.864 "compare": false, 00:26:07.864 "compare_and_write": false, 00:26:07.864 "abort": true, 00:26:07.864 "nvme_admin": false, 00:26:07.864 "nvme_io": false 00:26:07.864 }, 00:26:07.864 "memory_domains": [ 00:26:07.864 { 00:26:07.864 "dma_device_id": "system", 00:26:07.864 "dma_device_type": 1 00:26:07.864 }, 00:26:07.864 { 00:26:07.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.864 "dma_device_type": 2 00:26:07.864 } 00:26:07.864 ], 00:26:07.864 "driver_specific": {} 00:26:07.864 }' 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # jq .dif_type 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:07.864 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:08.122 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:08.122 "name": "BaseBdev3", 00:26:08.122 "aliases": [ 00:26:08.122 "55f6f309-1357-11ef-8e8f-9dd684e56d79" 00:26:08.122 ], 00:26:08.123 "product_name": "Malloc disk", 00:26:08.123 "block_size": 512, 00:26:08.123 "num_blocks": 65536, 00:26:08.123 "uuid": "55f6f309-1357-11ef-8e8f-9dd684e56d79", 00:26:08.123 "assigned_rate_limits": { 00:26:08.123 "rw_ios_per_sec": 0, 00:26:08.123 "rw_mbytes_per_sec": 0, 00:26:08.123 "r_mbytes_per_sec": 0, 00:26:08.123 "w_mbytes_per_sec": 0 00:26:08.123 }, 00:26:08.123 "claimed": true, 00:26:08.123 "claim_type": "exclusive_write", 00:26:08.123 "zoned": false, 00:26:08.123 "supported_io_types": { 00:26:08.123 "read": true, 00:26:08.123 "write": true, 00:26:08.123 "unmap": true, 00:26:08.123 "write_zeroes": true, 00:26:08.123 "flush": true, 00:26:08.123 "reset": true, 00:26:08.123 "compare": false, 00:26:08.123 "compare_and_write": false, 00:26:08.123 "abort": true, 00:26:08.123 "nvme_admin": false, 00:26:08.123 "nvme_io": false 00:26:08.123 }, 00:26:08.123 "memory_domains": [ 00:26:08.123 { 00:26:08.123 "dma_device_id": "system", 00:26:08.123 "dma_device_type": 1 00:26:08.123 }, 00:26:08.123 { 00:26:08.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.123 "dma_device_type": 2 00:26:08.123 } 00:26:08.123 ], 00:26:08.123 "driver_specific": {} 00:26:08.123 }' 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.123 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:08.392 "name": "BaseBdev4", 00:26:08.392 "aliases": [ 00:26:08.392 "566433fc-1357-11ef-8e8f-9dd684e56d79" 00:26:08.392 ], 00:26:08.392 "product_name": "Malloc disk", 00:26:08.392 "block_size": 512, 00:26:08.392 "num_blocks": 65536, 00:26:08.392 "uuid": "566433fc-1357-11ef-8e8f-9dd684e56d79", 00:26:08.392 "assigned_rate_limits": { 00:26:08.392 "rw_ios_per_sec": 0, 00:26:08.392 "rw_mbytes_per_sec": 0, 00:26:08.392 "r_mbytes_per_sec": 0, 00:26:08.392 "w_mbytes_per_sec": 0 00:26:08.392 }, 00:26:08.392 "claimed": true, 00:26:08.392 "claim_type": "exclusive_write", 00:26:08.392 "zoned": false, 00:26:08.392 "supported_io_types": { 00:26:08.392 "read": true, 00:26:08.392 "write": true, 00:26:08.392 "unmap": true, 00:26:08.392 "write_zeroes": true, 00:26:08.392 "flush": true, 00:26:08.392 "reset": true, 00:26:08.392 "compare": false, 00:26:08.392 "compare_and_write": false, 00:26:08.392 "abort": true, 00:26:08.392 "nvme_admin": false, 00:26:08.392 "nvme_io": false 00:26:08.392 }, 00:26:08.392 "memory_domains": [ 00:26:08.392 { 00:26:08.392 "dma_device_id": "system", 00:26:08.392 "dma_device_type": 1 00:26:08.392 }, 00:26:08.392 { 00:26:08.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.392 "dma_device_type": 2 00:26:08.392 } 00:26:08.392 ], 00:26:08.392 "driver_specific": {} 00:26:08.392 }' 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:08.392 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:08.651 07:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:08.909 [2024-05-16 07:39:02.216050] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:08.909 [2024-05-16 07:39:02.216075] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:08.909 [2024-05-16 07:39:02.216095] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.909 [2024-05-16 07:39:02.216164] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.909 [2024-05-16 07:39:02.216168] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b3b0f00 name Existed_Raid, state 
offline 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@342 -- # killprocess 61698 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 61698 ']' 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 61698 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps -c -o command 61698 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # tail -1 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:26:08.909 killing process with pid 61698 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61698' 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 61698 00:26:08.909 [2024-05-16 07:39:02.242887] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 61698 00:26:08.909 [2024-05-16 07:39:02.262105] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@344 -- # return 0 00:26:08.909 00:26:08.909 real 0m26.270s 00:26:08.909 user 0m48.007s 00:26:08.909 sys 0m3.749s 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:08.909 07:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.909 ************************************ 00:26:08.909 END TEST raid_state_function_test 00:26:08.909 ************************************ 00:26:08.909 07:39:02 bdev_raid -- bdev/bdev_raid.sh@804 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:26:08.909 07:39:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:08.909 07:39:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:08.909 07:39:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:09.168 ************************************ 00:26:09.168 START TEST raid_state_function_test_sb 00:26:09.168 ************************************ 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=4 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:09.168 07:39:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev3 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # echo BaseBdev4 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # raid_pid=62509 00:26:09.168 Process raid pid: 62509 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 62509' 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@247 -- # waitforlisten 62509 /var/tmp/spdk-raid.sock 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 62509 ']' 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:09.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
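The superblock variant repeats the same bring-up seen earlier: a dedicated bdev_svc application is launched with its own RPC socket, and the test blocks until that socket answers before issuing any bdev RPCs. A minimal sketch of that startup step, assuming the binary and socket paths shown in the trace; the polling loop is only a stand-in for the waitforlisten helper from autotest_common.sh:

    svc=/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # start the bdev service with raid debug logging on a private socket
    $svc -r $sock -i 0 -L bdev_raid &
    raid_pid=$!
    # wait until the RPC server is listening; rpc_get_methods is a lightweight probe
    until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done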
00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:09.168 07:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.168 [2024-05-16 07:39:02.482132] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:26:09.168 [2024-05-16 07:39:02.482343] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:26:09.427 EAL: TSC is not safe to use in SMP mode 00:26:09.427 EAL: TSC is not invariant 00:26:09.427 [2024-05-16 07:39:02.950367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.686 [2024-05-16 07:39:03.034131] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:26:09.686 [2024-05-16 07:39:03.036243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.686 [2024-05-16 07:39:03.036925] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:09.686 [2024-05-16 07:39:03.036936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:10.254 [2024-05-16 07:39:03.739637] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:10.254 [2024-05-16 07:39:03.739697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:10.254 [2024-05-16 07:39:03.739702] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:10.254 [2024-05-16 07:39:03.739709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:10.254 [2024-05-16 07:39:03.739713] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:10.254 [2024-05-16 07:39:03.739719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:10.254 [2024-05-16 07:39:03.739722] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:10.254 [2024-05-16 07:39:03.739728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.254 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.512 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.512 "name": "Existed_Raid", 00:26:10.512 "uuid": "5e53a158-1357-11ef-8e8f-9dd684e56d79", 00:26:10.512 "strip_size_kb": 0, 00:26:10.512 "state": "configuring", 00:26:10.512 "raid_level": "raid1", 00:26:10.512 "superblock": true, 00:26:10.512 "num_base_bdevs": 4, 00:26:10.512 "num_base_bdevs_discovered": 0, 00:26:10.512 "num_base_bdevs_operational": 4, 00:26:10.512 "base_bdevs_list": [ 00:26:10.512 { 00:26:10.512 "name": "BaseBdev1", 00:26:10.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.512 "is_configured": false, 00:26:10.512 "data_offset": 0, 00:26:10.512 "data_size": 0 00:26:10.512 }, 00:26:10.512 { 00:26:10.512 "name": "BaseBdev2", 00:26:10.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.512 "is_configured": false, 00:26:10.512 "data_offset": 0, 00:26:10.512 "data_size": 0 00:26:10.512 }, 00:26:10.512 { 00:26:10.512 "name": "BaseBdev3", 00:26:10.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.512 "is_configured": false, 00:26:10.512 "data_offset": 0, 00:26:10.512 "data_size": 0 00:26:10.512 }, 00:26:10.512 { 00:26:10.512 "name": "BaseBdev4", 00:26:10.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.512 "is_configured": false, 00:26:10.512 "data_offset": 0, 00:26:10.512 "data_size": 0 00:26:10.512 } 00:26:10.512 ] 00:26:10.512 }' 00:26:10.512 07:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.512 07:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.771 07:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:11.029 [2024-05-16 07:39:04.519600] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.029 [2024-05-16 07:39:04.519623] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b83c500 name Existed_Raid, state configuring 00:26:11.029 07:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:11.288 [2024-05-16 07:39:04.731622] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.288 [2024-05-16 07:39:04.731673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.288 [2024-05-16 07:39:04.731694] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:26:11.288 [2024-05-16 07:39:04.731701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.288 [2024-05-16 07:39:04.731705] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.288 [2024-05-16 07:39:04.731711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.288 [2024-05-16 07:39:04.731714] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:11.288 [2024-05-16 07:39:04.731721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:11.288 07:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:11.606 [2024-05-16 07:39:04.936501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.606 BaseBdev1 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:11.606 07:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.866 07:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:12.124 [ 00:26:12.124 { 00:26:12.124 "name": "BaseBdev1", 00:26:12.124 "aliases": [ 00:26:12.124 "5f0a1f55-1357-11ef-8e8f-9dd684e56d79" 00:26:12.124 ], 00:26:12.124 "product_name": "Malloc disk", 00:26:12.124 "block_size": 512, 00:26:12.124 "num_blocks": 65536, 00:26:12.124 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:12.124 "assigned_rate_limits": { 00:26:12.124 "rw_ios_per_sec": 0, 00:26:12.124 "rw_mbytes_per_sec": 0, 00:26:12.124 "r_mbytes_per_sec": 0, 00:26:12.124 "w_mbytes_per_sec": 0 00:26:12.124 }, 00:26:12.124 "claimed": true, 00:26:12.124 "claim_type": "exclusive_write", 00:26:12.124 "zoned": false, 00:26:12.124 "supported_io_types": { 00:26:12.124 "read": true, 00:26:12.124 "write": true, 00:26:12.124 "unmap": true, 00:26:12.124 "write_zeroes": true, 00:26:12.124 "flush": true, 00:26:12.124 "reset": true, 00:26:12.124 "compare": false, 00:26:12.125 "compare_and_write": false, 00:26:12.125 "abort": true, 00:26:12.125 "nvme_admin": false, 00:26:12.125 "nvme_io": false 00:26:12.125 }, 00:26:12.125 "memory_domains": [ 00:26:12.125 { 00:26:12.125 "dma_device_id": "system", 00:26:12.125 "dma_device_type": 1 00:26:12.125 }, 00:26:12.125 { 00:26:12.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.125 "dma_device_type": 2 00:26:12.125 } 00:26:12.125 ], 00:26:12.125 "driver_specific": {} 00:26:12.125 } 00:26:12.125 ] 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.125 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.391 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:12.391 "name": "Existed_Raid", 00:26:12.391 "uuid": "5eeafed0-1357-11ef-8e8f-9dd684e56d79", 00:26:12.391 "strip_size_kb": 0, 00:26:12.391 "state": "configuring", 00:26:12.391 "raid_level": "raid1", 00:26:12.391 "superblock": true, 00:26:12.391 "num_base_bdevs": 4, 00:26:12.391 "num_base_bdevs_discovered": 1, 00:26:12.391 "num_base_bdevs_operational": 4, 00:26:12.391 "base_bdevs_list": [ 00:26:12.391 { 00:26:12.391 "name": "BaseBdev1", 00:26:12.391 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:12.391 "is_configured": true, 00:26:12.391 "data_offset": 2048, 00:26:12.391 "data_size": 63488 00:26:12.391 }, 00:26:12.391 { 00:26:12.391 "name": "BaseBdev2", 00:26:12.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.391 "is_configured": false, 00:26:12.391 "data_offset": 0, 00:26:12.391 "data_size": 0 00:26:12.391 }, 00:26:12.391 { 00:26:12.391 "name": "BaseBdev3", 00:26:12.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.391 "is_configured": false, 00:26:12.391 "data_offset": 0, 00:26:12.391 "data_size": 0 00:26:12.391 }, 00:26:12.391 { 00:26:12.391 "name": "BaseBdev4", 00:26:12.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.391 "is_configured": false, 00:26:12.391 "data_offset": 0, 00:26:12.391 "data_size": 0 00:26:12.391 } 00:26:12.391 ] 00:26:12.391 }' 00:26:12.391 07:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:12.391 07:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.653 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:12.912 [2024-05-16 07:39:06.299604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:12.912 [2024-05-16 07:39:06.299632] 
bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b83c500 name Existed_Raid, state configuring 00:26:12.912 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:13.171 [2024-05-16 07:39:06.563622] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.171 [2024-05-16 07:39:06.564274] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:13.171 [2024-05-16 07:39:06.564311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:13.171 [2024-05-16 07:39:06.564315] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:13.171 [2024-05-16 07:39:06.564339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:13.171 [2024-05-16 07:39:06.564343] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:13.171 [2024-05-16 07:39:06.564350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.171 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.429 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.429 "name": "Existed_Raid", 00:26:13.429 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:13.429 "strip_size_kb": 0, 00:26:13.429 "state": "configuring", 00:26:13.429 "raid_level": "raid1", 00:26:13.429 "superblock": true, 00:26:13.429 "num_base_bdevs": 4, 00:26:13.429 "num_base_bdevs_discovered": 1, 00:26:13.429 "num_base_bdevs_operational": 4, 00:26:13.429 "base_bdevs_list": [ 00:26:13.429 { 00:26:13.429 "name": 
"BaseBdev1", 00:26:13.429 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:13.429 "is_configured": true, 00:26:13.429 "data_offset": 2048, 00:26:13.429 "data_size": 63488 00:26:13.429 }, 00:26:13.429 { 00:26:13.429 "name": "BaseBdev2", 00:26:13.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.429 "is_configured": false, 00:26:13.430 "data_offset": 0, 00:26:13.430 "data_size": 0 00:26:13.430 }, 00:26:13.430 { 00:26:13.430 "name": "BaseBdev3", 00:26:13.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.430 "is_configured": false, 00:26:13.430 "data_offset": 0, 00:26:13.430 "data_size": 0 00:26:13.430 }, 00:26:13.430 { 00:26:13.430 "name": "BaseBdev4", 00:26:13.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.430 "is_configured": false, 00:26:13.430 "data_offset": 0, 00:26:13.430 "data_size": 0 00:26:13.430 } 00:26:13.430 ] 00:26:13.430 }' 00:26:13.430 07:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.430 07:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.688 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:13.947 [2024-05-16 07:39:07.331706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.947 BaseBdev2 00:26:13.947 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:26:13.947 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:13.948 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:13.948 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:13.948 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:13.948 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:13.948 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.205 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:14.465 [ 00:26:14.465 { 00:26:14.465 "name": "BaseBdev2", 00:26:14.465 "aliases": [ 00:26:14.465 "6077b8ed-1357-11ef-8e8f-9dd684e56d79" 00:26:14.465 ], 00:26:14.465 "product_name": "Malloc disk", 00:26:14.465 "block_size": 512, 00:26:14.465 "num_blocks": 65536, 00:26:14.465 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:14.465 "assigned_rate_limits": { 00:26:14.465 "rw_ios_per_sec": 0, 00:26:14.465 "rw_mbytes_per_sec": 0, 00:26:14.465 "r_mbytes_per_sec": 0, 00:26:14.465 "w_mbytes_per_sec": 0 00:26:14.465 }, 00:26:14.465 "claimed": true, 00:26:14.465 "claim_type": "exclusive_write", 00:26:14.465 "zoned": false, 00:26:14.465 "supported_io_types": { 00:26:14.465 "read": true, 00:26:14.465 "write": true, 00:26:14.465 "unmap": true, 00:26:14.465 "write_zeroes": true, 00:26:14.465 "flush": true, 00:26:14.465 "reset": true, 00:26:14.465 "compare": false, 00:26:14.465 "compare_and_write": false, 00:26:14.465 "abort": true, 00:26:14.465 "nvme_admin": false, 00:26:14.465 "nvme_io": false 
00:26:14.465 }, 00:26:14.465 "memory_domains": [ 00:26:14.465 { 00:26:14.465 "dma_device_id": "system", 00:26:14.465 "dma_device_type": 1 00:26:14.465 }, 00:26:14.465 { 00:26:14.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.465 "dma_device_type": 2 00:26:14.465 } 00:26:14.465 ], 00:26:14.465 "driver_specific": {} 00:26:14.465 } 00:26:14.465 ] 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.465 07:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.802 07:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:14.802 "name": "Existed_Raid", 00:26:14.802 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:14.802 "strip_size_kb": 0, 00:26:14.802 "state": "configuring", 00:26:14.802 "raid_level": "raid1", 00:26:14.802 "superblock": true, 00:26:14.802 "num_base_bdevs": 4, 00:26:14.802 "num_base_bdevs_discovered": 2, 00:26:14.802 "num_base_bdevs_operational": 4, 00:26:14.802 "base_bdevs_list": [ 00:26:14.802 { 00:26:14.802 "name": "BaseBdev1", 00:26:14.802 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:14.802 "is_configured": true, 00:26:14.802 "data_offset": 2048, 00:26:14.802 "data_size": 63488 00:26:14.802 }, 00:26:14.802 { 00:26:14.802 "name": "BaseBdev2", 00:26:14.802 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:14.802 "is_configured": true, 00:26:14.802 "data_offset": 2048, 00:26:14.802 "data_size": 63488 00:26:14.802 }, 00:26:14.802 { 00:26:14.802 "name": "BaseBdev3", 00:26:14.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.802 "is_configured": false, 00:26:14.802 "data_offset": 0, 00:26:14.802 "data_size": 0 00:26:14.802 }, 00:26:14.802 { 00:26:14.802 "name": "BaseBdev4", 00:26:14.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.802 "is_configured": false, 00:26:14.802 "data_offset": 0, 
00:26:14.802 "data_size": 0 00:26:14.802 } 00:26:14.802 ] 00:26:14.802 }' 00:26:14.802 07:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:14.802 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.074 07:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:15.332 [2024-05-16 07:39:08.643693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:15.332 BaseBdev3 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev3 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.332 07:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:15.589 [ 00:26:15.589 { 00:26:15.589 "name": "BaseBdev3", 00:26:15.589 "aliases": [ 00:26:15.589 "613feaec-1357-11ef-8e8f-9dd684e56d79" 00:26:15.589 ], 00:26:15.589 "product_name": "Malloc disk", 00:26:15.589 "block_size": 512, 00:26:15.589 "num_blocks": 65536, 00:26:15.589 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:15.589 "assigned_rate_limits": { 00:26:15.589 "rw_ios_per_sec": 0, 00:26:15.589 "rw_mbytes_per_sec": 0, 00:26:15.589 "r_mbytes_per_sec": 0, 00:26:15.589 "w_mbytes_per_sec": 0 00:26:15.589 }, 00:26:15.589 "claimed": true, 00:26:15.589 "claim_type": "exclusive_write", 00:26:15.589 "zoned": false, 00:26:15.589 "supported_io_types": { 00:26:15.589 "read": true, 00:26:15.589 "write": true, 00:26:15.589 "unmap": true, 00:26:15.589 "write_zeroes": true, 00:26:15.589 "flush": true, 00:26:15.589 "reset": true, 00:26:15.589 "compare": false, 00:26:15.589 "compare_and_write": false, 00:26:15.589 "abort": true, 00:26:15.589 "nvme_admin": false, 00:26:15.589 "nvme_io": false 00:26:15.589 }, 00:26:15.589 "memory_domains": [ 00:26:15.589 { 00:26:15.589 "dma_device_id": "system", 00:26:15.589 "dma_device_type": 1 00:26:15.589 }, 00:26:15.589 { 00:26:15.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.590 "dma_device_type": 2 00:26:15.590 } 00:26:15.590 ], 00:26:15.590 "driver_specific": {} 00:26:15.590 } 00:26:15.590 ] 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.590 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.847 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.847 "name": "Existed_Raid", 00:26:15.847 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:15.847 "strip_size_kb": 0, 00:26:15.847 "state": "configuring", 00:26:15.847 "raid_level": "raid1", 00:26:15.847 "superblock": true, 00:26:15.847 "num_base_bdevs": 4, 00:26:15.847 "num_base_bdevs_discovered": 3, 00:26:15.847 "num_base_bdevs_operational": 4, 00:26:15.847 "base_bdevs_list": [ 00:26:15.847 { 00:26:15.847 "name": "BaseBdev1", 00:26:15.847 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:15.847 "is_configured": true, 00:26:15.847 "data_offset": 2048, 00:26:15.847 "data_size": 63488 00:26:15.847 }, 00:26:15.847 { 00:26:15.847 "name": "BaseBdev2", 00:26:15.847 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:15.847 "is_configured": true, 00:26:15.847 "data_offset": 2048, 00:26:15.847 "data_size": 63488 00:26:15.847 }, 00:26:15.847 { 00:26:15.847 "name": "BaseBdev3", 00:26:15.847 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:15.847 "is_configured": true, 00:26:15.847 "data_offset": 2048, 00:26:15.847 "data_size": 63488 00:26:15.847 }, 00:26:15.847 { 00:26:15.847 "name": "BaseBdev4", 00:26:15.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.847 "is_configured": false, 00:26:15.847 "data_offset": 0, 00:26:15.847 "data_size": 0 00:26:15.847 } 00:26:15.847 ] 00:26:15.847 }' 00:26:15.847 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.847 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:16.362 [2024-05-16 07:39:09.787663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.362 [2024-05-16 07:39:09.787714] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b83ca00 00:26:16.362 [2024-05-16 07:39:09.787718] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:26:16.362 [2024-05-16 07:39:09.787735] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b89fec0 00:26:16.362 [2024-05-16 07:39:09.787773] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b83ca00 00:26:16.362 [2024-05-16 07:39:09.787776] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b83ca00 00:26:16.362 [2024-05-16 07:39:09.787802] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.362 BaseBdev4 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev4 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:16.362 07:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:16.620 07:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:16.879 [ 00:26:16.879 { 00:26:16.879 "name": "BaseBdev4", 00:26:16.879 "aliases": [ 00:26:16.879 "61ee79f0-1357-11ef-8e8f-9dd684e56d79" 00:26:16.879 ], 00:26:16.879 "product_name": "Malloc disk", 00:26:16.879 "block_size": 512, 00:26:16.879 "num_blocks": 65536, 00:26:16.879 "uuid": "61ee79f0-1357-11ef-8e8f-9dd684e56d79", 00:26:16.879 "assigned_rate_limits": { 00:26:16.879 "rw_ios_per_sec": 0, 00:26:16.879 "rw_mbytes_per_sec": 0, 00:26:16.879 "r_mbytes_per_sec": 0, 00:26:16.879 "w_mbytes_per_sec": 0 00:26:16.879 }, 00:26:16.879 "claimed": true, 00:26:16.879 "claim_type": "exclusive_write", 00:26:16.879 "zoned": false, 00:26:16.879 "supported_io_types": { 00:26:16.879 "read": true, 00:26:16.879 "write": true, 00:26:16.879 "unmap": true, 00:26:16.879 "write_zeroes": true, 00:26:16.879 "flush": true, 00:26:16.879 "reset": true, 00:26:16.879 "compare": false, 00:26:16.879 "compare_and_write": false, 00:26:16.879 "abort": true, 00:26:16.879 "nvme_admin": false, 00:26:16.879 "nvme_io": false 00:26:16.879 }, 00:26:16.879 "memory_domains": [ 00:26:16.879 { 00:26:16.879 "dma_device_id": "system", 00:26:16.879 "dma_device_type": 1 00:26:16.879 }, 00:26:16.879 { 00:26:16.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.879 "dma_device_type": 2 00:26:16.879 } 00:26:16.879 ], 00:26:16.879 "driver_specific": {} 00:26:16.879 } 00:26:16.879 ] 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.879 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.137 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:17.137 "name": "Existed_Raid", 00:26:17.137 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:17.137 "strip_size_kb": 0, 00:26:17.137 "state": "online", 00:26:17.137 "raid_level": "raid1", 00:26:17.137 "superblock": true, 00:26:17.137 "num_base_bdevs": 4, 00:26:17.137 "num_base_bdevs_discovered": 4, 00:26:17.137 "num_base_bdevs_operational": 4, 00:26:17.137 "base_bdevs_list": [ 00:26:17.137 { 00:26:17.137 "name": "BaseBdev1", 00:26:17.137 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:17.137 "is_configured": true, 00:26:17.137 "data_offset": 2048, 00:26:17.137 "data_size": 63488 00:26:17.137 }, 00:26:17.137 { 00:26:17.137 "name": "BaseBdev2", 00:26:17.137 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:17.137 "is_configured": true, 00:26:17.137 "data_offset": 2048, 00:26:17.137 "data_size": 63488 00:26:17.137 }, 00:26:17.137 { 00:26:17.137 "name": "BaseBdev3", 00:26:17.137 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:17.137 "is_configured": true, 00:26:17.137 "data_offset": 2048, 00:26:17.137 "data_size": 63488 00:26:17.137 }, 00:26:17.137 { 00:26:17.137 "name": "BaseBdev4", 00:26:17.137 "uuid": "61ee79f0-1357-11ef-8e8f-9dd684e56d79", 00:26:17.137 "is_configured": true, 00:26:17.137 "data_offset": 2048, 00:26:17.137 "data_size": 63488 00:26:17.137 } 00:26:17.137 ] 00:26:17.137 }' 00:26:17.137 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:17.137 07:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:17.396 
07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:17.396 07:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:17.655 [2024-05-16 07:39:11.135632] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.655 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:17.655 "name": "Existed_Raid", 00:26:17.655 "aliases": [ 00:26:17.655 "6002893b-1357-11ef-8e8f-9dd684e56d79" 00:26:17.655 ], 00:26:17.655 "product_name": "Raid Volume", 00:26:17.655 "block_size": 512, 00:26:17.655 "num_blocks": 63488, 00:26:17.655 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:17.655 "assigned_rate_limits": { 00:26:17.655 "rw_ios_per_sec": 0, 00:26:17.655 "rw_mbytes_per_sec": 0, 00:26:17.655 "r_mbytes_per_sec": 0, 00:26:17.655 "w_mbytes_per_sec": 0 00:26:17.655 }, 00:26:17.655 "claimed": false, 00:26:17.655 "zoned": false, 00:26:17.655 "supported_io_types": { 00:26:17.655 "read": true, 00:26:17.655 "write": true, 00:26:17.655 "unmap": false, 00:26:17.655 "write_zeroes": true, 00:26:17.655 "flush": false, 00:26:17.655 "reset": true, 00:26:17.655 "compare": false, 00:26:17.655 "compare_and_write": false, 00:26:17.656 "abort": false, 00:26:17.656 "nvme_admin": false, 00:26:17.656 "nvme_io": false 00:26:17.656 }, 00:26:17.656 "memory_domains": [ 00:26:17.656 { 00:26:17.656 "dma_device_id": "system", 00:26:17.656 "dma_device_type": 1 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.656 "dma_device_type": 2 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "system", 00:26:17.656 "dma_device_type": 1 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.656 "dma_device_type": 2 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "system", 00:26:17.656 "dma_device_type": 1 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.656 "dma_device_type": 2 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "system", 00:26:17.656 "dma_device_type": 1 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.656 "dma_device_type": 2 00:26:17.656 } 00:26:17.656 ], 00:26:17.656 "driver_specific": { 00:26:17.656 "raid": { 00:26:17.656 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:17.656 "strip_size_kb": 0, 00:26:17.656 "state": "online", 00:26:17.656 "raid_level": "raid1", 00:26:17.656 "superblock": true, 00:26:17.656 "num_base_bdevs": 4, 00:26:17.656 "num_base_bdevs_discovered": 4, 00:26:17.656 "num_base_bdevs_operational": 4, 00:26:17.656 "base_bdevs_list": [ 00:26:17.656 { 00:26:17.656 "name": "BaseBdev1", 00:26:17.656 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:17.656 "is_configured": true, 00:26:17.656 "data_offset": 2048, 00:26:17.656 "data_size": 63488 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "name": "BaseBdev2", 00:26:17.656 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:17.656 "is_configured": true, 00:26:17.656 "data_offset": 2048, 00:26:17.656 "data_size": 63488 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "name": "BaseBdev3", 00:26:17.656 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:17.656 "is_configured": true, 00:26:17.656 "data_offset": 
2048, 00:26:17.656 "data_size": 63488 00:26:17.656 }, 00:26:17.656 { 00:26:17.656 "name": "BaseBdev4", 00:26:17.656 "uuid": "61ee79f0-1357-11ef-8e8f-9dd684e56d79", 00:26:17.656 "is_configured": true, 00:26:17.656 "data_offset": 2048, 00:26:17.656 "data_size": 63488 00:26:17.656 } 00:26:17.656 ] 00:26:17.656 } 00:26:17.656 } 00:26:17.656 }' 00:26:17.656 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.656 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:26:17.656 BaseBdev2 00:26:17.656 BaseBdev3 00:26:17.656 BaseBdev4' 00:26:17.656 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:17.656 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:17.656 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:17.914 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:17.914 "name": "BaseBdev1", 00:26:17.914 "aliases": [ 00:26:17.914 "5f0a1f55-1357-11ef-8e8f-9dd684e56d79" 00:26:17.914 ], 00:26:17.914 "product_name": "Malloc disk", 00:26:17.914 "block_size": 512, 00:26:17.914 "num_blocks": 65536, 00:26:17.914 "uuid": "5f0a1f55-1357-11ef-8e8f-9dd684e56d79", 00:26:17.914 "assigned_rate_limits": { 00:26:17.915 "rw_ios_per_sec": 0, 00:26:17.915 "rw_mbytes_per_sec": 0, 00:26:17.915 "r_mbytes_per_sec": 0, 00:26:17.915 "w_mbytes_per_sec": 0 00:26:17.915 }, 00:26:17.915 "claimed": true, 00:26:17.915 "claim_type": "exclusive_write", 00:26:17.915 "zoned": false, 00:26:17.915 "supported_io_types": { 00:26:17.915 "read": true, 00:26:17.915 "write": true, 00:26:17.915 "unmap": true, 00:26:17.915 "write_zeroes": true, 00:26:17.915 "flush": true, 00:26:17.915 "reset": true, 00:26:17.915 "compare": false, 00:26:17.915 "compare_and_write": false, 00:26:17.915 "abort": true, 00:26:17.915 "nvme_admin": false, 00:26:17.915 "nvme_io": false 00:26:17.915 }, 00:26:17.915 "memory_domains": [ 00:26:17.915 { 00:26:17.915 "dma_device_id": "system", 00:26:17.915 "dma_device_type": 1 00:26:17.915 }, 00:26:17.915 { 00:26:17.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.915 "dma_device_type": 2 00:26:17.915 } 00:26:17.915 ], 00:26:17.915 "driver_specific": {} 00:26:17.915 }' 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:17.915 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:18.172 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:18.172 "name": "BaseBdev2", 00:26:18.172 "aliases": [ 00:26:18.172 "6077b8ed-1357-11ef-8e8f-9dd684e56d79" 00:26:18.172 ], 00:26:18.172 "product_name": "Malloc disk", 00:26:18.172 "block_size": 512, 00:26:18.172 "num_blocks": 65536, 00:26:18.172 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:18.172 "assigned_rate_limits": { 00:26:18.172 "rw_ios_per_sec": 0, 00:26:18.172 "rw_mbytes_per_sec": 0, 00:26:18.172 "r_mbytes_per_sec": 0, 00:26:18.172 "w_mbytes_per_sec": 0 00:26:18.172 }, 00:26:18.172 "claimed": true, 00:26:18.172 "claim_type": "exclusive_write", 00:26:18.172 "zoned": false, 00:26:18.172 "supported_io_types": { 00:26:18.172 "read": true, 00:26:18.172 "write": true, 00:26:18.172 "unmap": true, 00:26:18.172 "write_zeroes": true, 00:26:18.172 "flush": true, 00:26:18.172 "reset": true, 00:26:18.172 "compare": false, 00:26:18.172 "compare_and_write": false, 00:26:18.172 "abort": true, 00:26:18.172 "nvme_admin": false, 00:26:18.172 "nvme_io": false 00:26:18.172 }, 00:26:18.172 "memory_domains": [ 00:26:18.172 { 00:26:18.172 "dma_device_id": "system", 00:26:18.172 "dma_device_type": 1 00:26:18.172 }, 00:26:18.172 { 00:26:18.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.172 "dma_device_type": 2 00:26:18.172 } 00:26:18.172 ], 00:26:18.172 "driver_specific": {} 00:26:18.172 }' 00:26:18.172 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:18.431 07:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:18.726 "name": "BaseBdev3", 00:26:18.726 "aliases": [ 00:26:18.726 "613feaec-1357-11ef-8e8f-9dd684e56d79" 00:26:18.726 ], 00:26:18.726 "product_name": "Malloc disk", 00:26:18.726 "block_size": 512, 00:26:18.726 "num_blocks": 65536, 00:26:18.726 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:18.726 "assigned_rate_limits": { 00:26:18.726 "rw_ios_per_sec": 0, 00:26:18.726 "rw_mbytes_per_sec": 0, 00:26:18.726 "r_mbytes_per_sec": 0, 00:26:18.726 "w_mbytes_per_sec": 0 00:26:18.726 }, 00:26:18.726 "claimed": true, 00:26:18.726 "claim_type": "exclusive_write", 00:26:18.726 "zoned": false, 00:26:18.726 "supported_io_types": { 00:26:18.726 "read": true, 00:26:18.726 "write": true, 00:26:18.726 "unmap": true, 00:26:18.726 "write_zeroes": true, 00:26:18.726 "flush": true, 00:26:18.726 "reset": true, 00:26:18.726 "compare": false, 00:26:18.726 "compare_and_write": false, 00:26:18.726 "abort": true, 00:26:18.726 "nvme_admin": false, 00:26:18.726 "nvme_io": false 00:26:18.726 }, 00:26:18.726 "memory_domains": [ 00:26:18.726 { 00:26:18.726 "dma_device_id": "system", 00:26:18.726 "dma_device_type": 1 00:26:18.726 }, 00:26:18.726 { 00:26:18.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.726 "dma_device_type": 2 00:26:18.726 } 00:26:18.726 ], 00:26:18.726 "driver_specific": {} 00:26:18.726 }' 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:18.726 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:18.984 "name": "BaseBdev4", 00:26:18.984 "aliases": [ 00:26:18.984 "61ee79f0-1357-11ef-8e8f-9dd684e56d79" 
00:26:18.984 ], 00:26:18.984 "product_name": "Malloc disk", 00:26:18.984 "block_size": 512, 00:26:18.984 "num_blocks": 65536, 00:26:18.984 "uuid": "61ee79f0-1357-11ef-8e8f-9dd684e56d79", 00:26:18.984 "assigned_rate_limits": { 00:26:18.984 "rw_ios_per_sec": 0, 00:26:18.984 "rw_mbytes_per_sec": 0, 00:26:18.984 "r_mbytes_per_sec": 0, 00:26:18.984 "w_mbytes_per_sec": 0 00:26:18.984 }, 00:26:18.984 "claimed": true, 00:26:18.984 "claim_type": "exclusive_write", 00:26:18.984 "zoned": false, 00:26:18.984 "supported_io_types": { 00:26:18.984 "read": true, 00:26:18.984 "write": true, 00:26:18.984 "unmap": true, 00:26:18.984 "write_zeroes": true, 00:26:18.984 "flush": true, 00:26:18.984 "reset": true, 00:26:18.984 "compare": false, 00:26:18.984 "compare_and_write": false, 00:26:18.984 "abort": true, 00:26:18.984 "nvme_admin": false, 00:26:18.984 "nvme_io": false 00:26:18.984 }, 00:26:18.984 "memory_domains": [ 00:26:18.984 { 00:26:18.984 "dma_device_id": "system", 00:26:18.984 "dma_device_type": 1 00:26:18.984 }, 00:26:18.984 { 00:26:18.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.984 "dma_device_type": 2 00:26:18.984 } 00:26:18.984 ], 00:26:18.984 "driver_specific": {} 00:26:18.984 }' 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:18.984 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:19.241 [2024-05-16 07:39:12.607603] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # local expected_state 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # case $1 in 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 0 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.241 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.499 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:19.499 "name": "Existed_Raid", 00:26:19.499 "uuid": "6002893b-1357-11ef-8e8f-9dd684e56d79", 00:26:19.499 "strip_size_kb": 0, 00:26:19.499 "state": "online", 00:26:19.499 "raid_level": "raid1", 00:26:19.499 "superblock": true, 00:26:19.499 "num_base_bdevs": 4, 00:26:19.499 "num_base_bdevs_discovered": 3, 00:26:19.499 "num_base_bdevs_operational": 3, 00:26:19.499 "base_bdevs_list": [ 00:26:19.499 { 00:26:19.499 "name": null, 00:26:19.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.499 "is_configured": false, 00:26:19.499 "data_offset": 2048, 00:26:19.499 "data_size": 63488 00:26:19.499 }, 00:26:19.499 { 00:26:19.499 "name": "BaseBdev2", 00:26:19.499 "uuid": "6077b8ed-1357-11ef-8e8f-9dd684e56d79", 00:26:19.499 "is_configured": true, 00:26:19.499 "data_offset": 2048, 00:26:19.499 "data_size": 63488 00:26:19.499 }, 00:26:19.499 { 00:26:19.499 "name": "BaseBdev3", 00:26:19.499 "uuid": "613feaec-1357-11ef-8e8f-9dd684e56d79", 00:26:19.499 "is_configured": true, 00:26:19.499 "data_offset": 2048, 00:26:19.499 "data_size": 63488 00:26:19.499 }, 00:26:19.499 { 00:26:19.499 "name": "BaseBdev4", 00:26:19.499 "uuid": "61ee79f0-1357-11ef-8e8f-9dd684e56d79", 00:26:19.499 "is_configured": true, 00:26:19.499 "data_offset": 2048, 00:26:19.499 "data_size": 63488 00:26:19.499 } 00:26:19.499 ] 00:26:19.499 }' 00:26:19.499 07:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:19.499 07:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.756 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:19.756 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:19.756 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.756 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:20.014 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:20.014 
07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.014 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:20.271 [2024-05-16 07:39:13.608371] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:20.271 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:20.271 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:20.271 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.271 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:20.528 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:20.528 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.528 07:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:20.786 [2024-05-16 07:39:14.149207] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:20.786 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:20.786 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:20.786 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.786 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:26:21.044 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:26:21.044 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:21.044 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:21.303 [2024-05-16 07:39:14.609965] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:21.303 [2024-05-16 07:39:14.610003] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.303 [2024-05-16 07:39:14.614805] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.303 [2024-05-16 07:39:14.614820] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.303 [2024-05-16 07:39:14.614824] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b83ca00 name Existed_Raid, state offline 00:26:21.303 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:21.303 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:21.303 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.303 07:39:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # '[' 4 -gt 2 ']' 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i = 1 )) 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:21.560 07:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:21.560 BaseBdev2 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev2 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:21.560 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:22.126 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:22.384 [ 00:26:22.384 { 00:26:22.384 "name": "BaseBdev2", 00:26:22.384 "aliases": [ 00:26:22.384 "6512cb7e-1357-11ef-8e8f-9dd684e56d79" 00:26:22.384 ], 00:26:22.384 "product_name": "Malloc disk", 00:26:22.384 "block_size": 512, 00:26:22.384 "num_blocks": 65536, 00:26:22.384 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:22.384 "assigned_rate_limits": { 00:26:22.384 "rw_ios_per_sec": 0, 00:26:22.384 "rw_mbytes_per_sec": 0, 00:26:22.384 "r_mbytes_per_sec": 0, 00:26:22.384 "w_mbytes_per_sec": 0 00:26:22.384 }, 00:26:22.384 "claimed": false, 00:26:22.384 "zoned": false, 00:26:22.384 "supported_io_types": { 00:26:22.384 "read": true, 00:26:22.384 "write": true, 00:26:22.384 "unmap": true, 00:26:22.384 "write_zeroes": true, 00:26:22.384 "flush": true, 00:26:22.384 "reset": true, 00:26:22.384 "compare": false, 00:26:22.384 "compare_and_write": false, 00:26:22.384 "abort": true, 00:26:22.384 "nvme_admin": false, 00:26:22.384 "nvme_io": false 00:26:22.384 }, 00:26:22.384 "memory_domains": [ 00:26:22.384 { 00:26:22.384 "dma_device_id": "system", 00:26:22.384 "dma_device_type": 1 00:26:22.384 }, 00:26:22.384 { 00:26:22.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.384 "dma_device_type": 2 00:26:22.384 } 00:26:22.384 ], 00:26:22.384 "driver_specific": {} 00:26:22.384 } 00:26:22.384 ] 00:26:22.384 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:22.384 07:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:22.384 07:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 
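For reference, the loop traced above rebuilds each base device with the same RPC sequence; a minimal standalone sketch against this run's socket (the bdev name BaseBdevN is illustrative, not taken from the test) would be:
  # create a 32 MiB malloc bdev with a 512-byte block size (65536 blocks, matching the dumps above)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdevN
  # let examine callbacks settle, then confirm the bdev is visible within 2000 ms
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdevN -t 2000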
00:26:22.384 07:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:22.643 BaseBdev3 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev3 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:22.643 07:39:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:22.901 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:23.159 [ 00:26:23.159 { 00:26:23.159 "name": "BaseBdev3", 00:26:23.159 "aliases": [ 00:26:23.159 "659b8310-1357-11ef-8e8f-9dd684e56d79" 00:26:23.159 ], 00:26:23.159 "product_name": "Malloc disk", 00:26:23.159 "block_size": 512, 00:26:23.159 "num_blocks": 65536, 00:26:23.159 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:23.159 "assigned_rate_limits": { 00:26:23.159 "rw_ios_per_sec": 0, 00:26:23.159 "rw_mbytes_per_sec": 0, 00:26:23.159 "r_mbytes_per_sec": 0, 00:26:23.159 "w_mbytes_per_sec": 0 00:26:23.159 }, 00:26:23.159 "claimed": false, 00:26:23.159 "zoned": false, 00:26:23.159 "supported_io_types": { 00:26:23.159 "read": true, 00:26:23.159 "write": true, 00:26:23.159 "unmap": true, 00:26:23.159 "write_zeroes": true, 00:26:23.159 "flush": true, 00:26:23.159 "reset": true, 00:26:23.159 "compare": false, 00:26:23.159 "compare_and_write": false, 00:26:23.159 "abort": true, 00:26:23.159 "nvme_admin": false, 00:26:23.159 "nvme_io": false 00:26:23.159 }, 00:26:23.159 "memory_domains": [ 00:26:23.159 { 00:26:23.159 "dma_device_id": "system", 00:26:23.159 "dma_device_type": 1 00:26:23.159 }, 00:26:23.159 { 00:26:23.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.159 "dma_device_type": 2 00:26:23.159 } 00:26:23.159 ], 00:26:23.159 "driver_specific": {} 00:26:23.159 } 00:26:23.159 ] 00:26:23.159 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:23.159 07:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:23.159 07:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:23.159 07:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:23.417 BaseBdev4 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # waitforbdev BaseBdev4 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:23.417 
07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:23.417 07:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.674 07:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:23.932 [ 00:26:23.932 { 00:26:23.932 "name": "BaseBdev4", 00:26:23.932 "aliases": [ 00:26:23.932 "66212dda-1357-11ef-8e8f-9dd684e56d79" 00:26:23.932 ], 00:26:23.932 "product_name": "Malloc disk", 00:26:23.932 "block_size": 512, 00:26:23.932 "num_blocks": 65536, 00:26:23.932 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:23.932 "assigned_rate_limits": { 00:26:23.932 "rw_ios_per_sec": 0, 00:26:23.932 "rw_mbytes_per_sec": 0, 00:26:23.932 "r_mbytes_per_sec": 0, 00:26:23.932 "w_mbytes_per_sec": 0 00:26:23.932 }, 00:26:23.932 "claimed": false, 00:26:23.932 "zoned": false, 00:26:23.932 "supported_io_types": { 00:26:23.932 "read": true, 00:26:23.932 "write": true, 00:26:23.932 "unmap": true, 00:26:23.932 "write_zeroes": true, 00:26:23.932 "flush": true, 00:26:23.932 "reset": true, 00:26:23.932 "compare": false, 00:26:23.932 "compare_and_write": false, 00:26:23.932 "abort": true, 00:26:23.932 "nvme_admin": false, 00:26:23.932 "nvme_io": false 00:26:23.932 }, 00:26:23.932 "memory_domains": [ 00:26:23.932 { 00:26:23.932 "dma_device_id": "system", 00:26:23.932 "dma_device_type": 1 00:26:23.932 }, 00:26:23.932 { 00:26:23.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.932 "dma_device_type": 2 00:26:23.932 } 00:26:23.932 ], 00:26:23.932 "driver_specific": {} 00:26:23.932 } 00:26:23.932 ] 00:26:23.932 07:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:23.932 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i++ )) 00:26:23.932 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # (( i < num_base_bdevs )) 00:26:23.932 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:24.189 [2024-05-16 07:39:17.686777] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:24.189 [2024-05-16 07:39:17.686826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:24.189 [2024-05-16 07:39:17.686833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.189 [2024-05-16 07:39:17.687273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.189 [2024-05-16 07:39:17.687284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:24.189 07:39:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.189 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.447 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:24.447 "name": "Existed_Raid", 00:26:24.447 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:24.447 "strip_size_kb": 0, 00:26:24.447 "state": "configuring", 00:26:24.447 "raid_level": "raid1", 00:26:24.447 "superblock": true, 00:26:24.447 "num_base_bdevs": 4, 00:26:24.447 "num_base_bdevs_discovered": 3, 00:26:24.447 "num_base_bdevs_operational": 4, 00:26:24.447 "base_bdevs_list": [ 00:26:24.447 { 00:26:24.447 "name": "BaseBdev1", 00:26:24.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.447 "is_configured": false, 00:26:24.447 "data_offset": 0, 00:26:24.447 "data_size": 0 00:26:24.447 }, 00:26:24.447 { 00:26:24.447 "name": "BaseBdev2", 00:26:24.447 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 }, 00:26:24.447 { 00:26:24.447 "name": "BaseBdev3", 00:26:24.447 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 }, 00:26:24.447 { 00:26:24.447 "name": "BaseBdev4", 00:26:24.447 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 } 00:26:24.447 ] 00:26:24.447 }' 00:26:24.447 07:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:24.447 07:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.012 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:25.012 [2024-05-16 07:39:18.538806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:25.012 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:25.012 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:25.012 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:26:25.012 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.013 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.585 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:25.585 "name": "Existed_Raid", 00:26:25.585 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:25.585 "strip_size_kb": 0, 00:26:25.585 "state": "configuring", 00:26:25.585 "raid_level": "raid1", 00:26:25.585 "superblock": true, 00:26:25.585 "num_base_bdevs": 4, 00:26:25.586 "num_base_bdevs_discovered": 2, 00:26:25.586 "num_base_bdevs_operational": 4, 00:26:25.586 "base_bdevs_list": [ 00:26:25.586 { 00:26:25.586 "name": "BaseBdev1", 00:26:25.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.586 "is_configured": false, 00:26:25.586 "data_offset": 0, 00:26:25.586 "data_size": 0 00:26:25.586 }, 00:26:25.586 { 00:26:25.586 "name": null, 00:26:25.586 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:25.586 "is_configured": false, 00:26:25.586 "data_offset": 2048, 00:26:25.586 "data_size": 63488 00:26:25.586 }, 00:26:25.586 { 00:26:25.586 "name": "BaseBdev3", 00:26:25.586 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:25.586 "is_configured": true, 00:26:25.586 "data_offset": 2048, 00:26:25.586 "data_size": 63488 00:26:25.586 }, 00:26:25.586 { 00:26:25.586 "name": "BaseBdev4", 00:26:25.586 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:25.586 "is_configured": true, 00:26:25.586 "data_offset": 2048, 00:26:25.586 "data_size": 63488 00:26:25.586 } 00:26:25.586 ] 00:26:25.586 }' 00:26:25.586 07:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:25.586 07:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.852 07:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.852 07:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:26.121 07:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # [[ false == \f\a\l\s\e ]] 00:26:26.121 07:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:26.393 [2024-05-16 07:39:19.706914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
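The verify_raid_bdev_state checks traced in this run reduce to one RPC plus a jq filter; a hedged standalone equivalent, reusing the socket and raid name from this log, would be:
  # dump every raid bdev and keep only the array under test
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'
  # the harness then compares .state (online/configuring/offline), .raid_level,
  # .num_base_bdevs_discovered, .num_base_bdevs_operational and
  # .base_bdevs_list[].is_configured against the values expected for the step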
00:26:26.393 BaseBdev1 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # waitforbdev BaseBdev1 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:26.393 07:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:26.664 07:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:26.937 [ 00:26:26.937 { 00:26:26.937 "name": "BaseBdev1", 00:26:26.937 "aliases": [ 00:26:26.937 "67d807ad-1357-11ef-8e8f-9dd684e56d79" 00:26:26.937 ], 00:26:26.937 "product_name": "Malloc disk", 00:26:26.937 "block_size": 512, 00:26:26.937 "num_blocks": 65536, 00:26:26.937 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:26.937 "assigned_rate_limits": { 00:26:26.937 "rw_ios_per_sec": 0, 00:26:26.937 "rw_mbytes_per_sec": 0, 00:26:26.937 "r_mbytes_per_sec": 0, 00:26:26.937 "w_mbytes_per_sec": 0 00:26:26.937 }, 00:26:26.937 "claimed": true, 00:26:26.937 "claim_type": "exclusive_write", 00:26:26.937 "zoned": false, 00:26:26.937 "supported_io_types": { 00:26:26.937 "read": true, 00:26:26.937 "write": true, 00:26:26.937 "unmap": true, 00:26:26.937 "write_zeroes": true, 00:26:26.937 "flush": true, 00:26:26.937 "reset": true, 00:26:26.937 "compare": false, 00:26:26.937 "compare_and_write": false, 00:26:26.937 "abort": true, 00:26:26.937 "nvme_admin": false, 00:26:26.937 "nvme_io": false 00:26:26.937 }, 00:26:26.937 "memory_domains": [ 00:26:26.937 { 00:26:26.937 "dma_device_id": "system", 00:26:26.937 "dma_device_type": 1 00:26:26.937 }, 00:26:26.937 { 00:26:26.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.937 "dma_device_type": 2 00:26:26.937 } 00:26:26.937 ], 00:26:26.937 "driver_specific": {} 00:26:26.937 } 00:26:26.937 ] 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.937 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.200 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:27.200 "name": "Existed_Raid", 00:26:27.200 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:27.200 "strip_size_kb": 0, 00:26:27.200 "state": "configuring", 00:26:27.200 "raid_level": "raid1", 00:26:27.200 "superblock": true, 00:26:27.200 "num_base_bdevs": 4, 00:26:27.200 "num_base_bdevs_discovered": 3, 00:26:27.200 "num_base_bdevs_operational": 4, 00:26:27.200 "base_bdevs_list": [ 00:26:27.200 { 00:26:27.200 "name": "BaseBdev1", 00:26:27.200 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:27.200 "is_configured": true, 00:26:27.200 "data_offset": 2048, 00:26:27.200 "data_size": 63488 00:26:27.200 }, 00:26:27.200 { 00:26:27.200 "name": null, 00:26:27.200 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:27.200 "is_configured": false, 00:26:27.200 "data_offset": 2048, 00:26:27.200 "data_size": 63488 00:26:27.200 }, 00:26:27.200 { 00:26:27.200 "name": "BaseBdev3", 00:26:27.200 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:27.200 "is_configured": true, 00:26:27.200 "data_offset": 2048, 00:26:27.200 "data_size": 63488 00:26:27.200 }, 00:26:27.200 { 00:26:27.200 "name": "BaseBdev4", 00:26:27.200 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:27.200 "is_configured": true, 00:26:27.200 "data_offset": 2048, 00:26:27.200 "data_size": 63488 00:26:27.200 } 00:26:27.200 ] 00:26:27.200 }' 00:26:27.200 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:27.200 07:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.458 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.458 07:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:27.716 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:27.716 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:27.974 [2024-05-16 07:39:21.470833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.974 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.233 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.233 "name": "Existed_Raid", 00:26:28.233 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:28.233 "strip_size_kb": 0, 00:26:28.233 "state": "configuring", 00:26:28.233 "raid_level": "raid1", 00:26:28.233 "superblock": true, 00:26:28.233 "num_base_bdevs": 4, 00:26:28.233 "num_base_bdevs_discovered": 2, 00:26:28.233 "num_base_bdevs_operational": 4, 00:26:28.233 "base_bdevs_list": [ 00:26:28.233 { 00:26:28.233 "name": "BaseBdev1", 00:26:28.233 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:28.233 "is_configured": true, 00:26:28.233 "data_offset": 2048, 00:26:28.233 "data_size": 63488 00:26:28.233 }, 00:26:28.233 { 00:26:28.233 "name": null, 00:26:28.233 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:28.233 "is_configured": false, 00:26:28.233 "data_offset": 2048, 00:26:28.233 "data_size": 63488 00:26:28.233 }, 00:26:28.233 { 00:26:28.233 "name": null, 00:26:28.233 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:28.233 "is_configured": false, 00:26:28.233 "data_offset": 2048, 00:26:28.233 "data_size": 63488 00:26:28.233 }, 00:26:28.233 { 00:26:28.233 "name": "BaseBdev4", 00:26:28.233 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:28.233 "is_configured": true, 00:26:28.233 "data_offset": 2048, 00:26:28.233 "data_size": 63488 00:26:28.233 } 00:26:28.233 ] 00:26:28.233 }' 00:26:28.233 07:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.233 07:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.798 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.798 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:29.056 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # [[ false == \f\a\l\s\e ]] 00:26:29.056 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:29.314 [2024-05-16 07:39:22.611242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.314 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.572 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.572 "name": "Existed_Raid", 00:26:29.572 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:29.572 "strip_size_kb": 0, 00:26:29.572 "state": "configuring", 00:26:29.572 "raid_level": "raid1", 00:26:29.572 "superblock": true, 00:26:29.572 "num_base_bdevs": 4, 00:26:29.572 "num_base_bdevs_discovered": 3, 00:26:29.572 "num_base_bdevs_operational": 4, 00:26:29.572 "base_bdevs_list": [ 00:26:29.572 { 00:26:29.572 "name": "BaseBdev1", 00:26:29.572 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:29.572 "is_configured": true, 00:26:29.572 "data_offset": 2048, 00:26:29.572 "data_size": 63488 00:26:29.572 }, 00:26:29.572 { 00:26:29.572 "name": null, 00:26:29.572 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:29.572 "is_configured": false, 00:26:29.572 "data_offset": 2048, 00:26:29.572 "data_size": 63488 00:26:29.572 }, 00:26:29.572 { 00:26:29.572 "name": "BaseBdev3", 00:26:29.572 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:29.572 "is_configured": true, 00:26:29.572 "data_offset": 2048, 00:26:29.572 "data_size": 63488 00:26:29.572 }, 00:26:29.572 { 00:26:29.572 "name": "BaseBdev4", 00:26:29.572 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:29.572 "is_configured": true, 00:26:29.572 "data_offset": 2048, 00:26:29.572 "data_size": 63488 00:26:29.572 } 00:26:29.572 ] 00:26:29.572 }' 00:26:29.572 07:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.572 07:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.828 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:29.828 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.085 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@324 -- # [[ true == \t\r\u\e ]] 00:26:30.085 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:26:30.343 [2024-05-16 07:39:23.739223] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.343 07:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.601 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:30.601 "name": "Existed_Raid", 00:26:30.601 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:30.601 "strip_size_kb": 0, 00:26:30.601 "state": "configuring", 00:26:30.601 "raid_level": "raid1", 00:26:30.601 "superblock": true, 00:26:30.601 "num_base_bdevs": 4, 00:26:30.601 "num_base_bdevs_discovered": 2, 00:26:30.601 "num_base_bdevs_operational": 4, 00:26:30.601 "base_bdevs_list": [ 00:26:30.601 { 00:26:30.601 "name": null, 00:26:30.601 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:30.601 "is_configured": false, 00:26:30.601 "data_offset": 2048, 00:26:30.601 "data_size": 63488 00:26:30.601 }, 00:26:30.601 { 00:26:30.601 "name": null, 00:26:30.601 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:30.601 "is_configured": false, 00:26:30.602 "data_offset": 2048, 00:26:30.602 "data_size": 63488 00:26:30.602 }, 00:26:30.602 { 00:26:30.602 "name": "BaseBdev3", 00:26:30.602 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:30.602 "is_configured": true, 00:26:30.602 "data_offset": 2048, 00:26:30.602 "data_size": 63488 00:26:30.602 }, 00:26:30.602 { 00:26:30.602 "name": "BaseBdev4", 00:26:30.602 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:30.602 "is_configured": true, 00:26:30.602 "data_offset": 2048, 00:26:30.602 "data_size": 63488 00:26:30.602 } 00:26:30.602 ] 00:26:30.602 }' 00:26:30.602 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:30.602 07:39:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.177 07:39:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@328 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # [[ false == \f\a\l\s\e ]] 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:31.433 [2024-05-16 07:39:24.924000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.433 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.434 07:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.690 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.690 "name": "Existed_Raid", 00:26:31.690 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:31.690 "strip_size_kb": 0, 00:26:31.690 "state": "configuring", 00:26:31.690 "raid_level": "raid1", 00:26:31.690 "superblock": true, 00:26:31.690 "num_base_bdevs": 4, 00:26:31.690 "num_base_bdevs_discovered": 3, 00:26:31.690 "num_base_bdevs_operational": 4, 00:26:31.690 "base_bdevs_list": [ 00:26:31.690 { 00:26:31.690 "name": null, 00:26:31.690 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:31.690 "is_configured": false, 00:26:31.690 "data_offset": 2048, 00:26:31.690 "data_size": 63488 00:26:31.690 }, 00:26:31.690 { 00:26:31.690 "name": "BaseBdev2", 00:26:31.690 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:31.690 "is_configured": true, 00:26:31.690 "data_offset": 2048, 00:26:31.690 "data_size": 63488 00:26:31.690 }, 00:26:31.690 { 00:26:31.690 "name": "BaseBdev3", 00:26:31.690 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:31.690 "is_configured": true, 00:26:31.690 "data_offset": 2048, 00:26:31.690 "data_size": 63488 00:26:31.690 }, 00:26:31.690 { 00:26:31.690 "name": "BaseBdev4", 00:26:31.690 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:31.690 "is_configured": true, 00:26:31.690 "data_offset": 2048, 00:26:31.690 "data_size": 63488 00:26:31.690 } 00:26:31.690 ] 00:26:31.690 }' 00:26:31.690 07:39:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.690 07:39:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.947 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.947 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:32.205 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@332 -- # [[ true == \t\r\u\e ]] 00:26:32.205 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.205 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:32.463 07:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 67d807ad-1357-11ef-8e8f-9dd684e56d79 00:26:32.722 [2024-05-16 07:39:26.204093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:32.722 [2024-05-16 07:39:26.204142] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b83cf00 00:26:32.722 [2024-05-16 07:39:26.204147] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:32.722 [2024-05-16 07:39:26.204166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b89fe20 00:26:32.722 [2024-05-16 07:39:26.204206] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b83cf00 00:26:32.722 [2024-05-16 07:39:26.204210] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b83cf00 00:26:32.722 [2024-05-16 07:39:26.204226] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.722 NewBaseBdev 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # waitforbdev NewBaseBdev 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:26:32.722 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:33.024 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:33.306 [ 00:26:33.306 { 00:26:33.306 "name": "NewBaseBdev", 00:26:33.306 "aliases": [ 00:26:33.306 "67d807ad-1357-11ef-8e8f-9dd684e56d79" 00:26:33.306 ], 00:26:33.306 "product_name": "Malloc disk", 00:26:33.306 "block_size": 512, 00:26:33.306 "num_blocks": 65536, 00:26:33.306 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:33.306 "assigned_rate_limits": { 00:26:33.306 
"rw_ios_per_sec": 0, 00:26:33.306 "rw_mbytes_per_sec": 0, 00:26:33.306 "r_mbytes_per_sec": 0, 00:26:33.306 "w_mbytes_per_sec": 0 00:26:33.306 }, 00:26:33.306 "claimed": true, 00:26:33.306 "claim_type": "exclusive_write", 00:26:33.306 "zoned": false, 00:26:33.306 "supported_io_types": { 00:26:33.306 "read": true, 00:26:33.306 "write": true, 00:26:33.306 "unmap": true, 00:26:33.306 "write_zeroes": true, 00:26:33.306 "flush": true, 00:26:33.306 "reset": true, 00:26:33.306 "compare": false, 00:26:33.306 "compare_and_write": false, 00:26:33.306 "abort": true, 00:26:33.306 "nvme_admin": false, 00:26:33.306 "nvme_io": false 00:26:33.306 }, 00:26:33.306 "memory_domains": [ 00:26:33.306 { 00:26:33.306 "dma_device_id": "system", 00:26:33.306 "dma_device_type": 1 00:26:33.306 }, 00:26:33.306 { 00:26:33.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.306 "dma_device_type": 2 00:26:33.306 } 00:26:33.306 ], 00:26:33.306 "driver_specific": {} 00:26:33.306 } 00:26:33.306 ] 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:33.306 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.307 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.565 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.565 "name": "Existed_Raid", 00:26:33.565 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:33.565 "strip_size_kb": 0, 00:26:33.565 "state": "online", 00:26:33.565 "raid_level": "raid1", 00:26:33.565 "superblock": true, 00:26:33.565 "num_base_bdevs": 4, 00:26:33.565 "num_base_bdevs_discovered": 4, 00:26:33.565 "num_base_bdevs_operational": 4, 00:26:33.565 "base_bdevs_list": [ 00:26:33.565 { 00:26:33.565 "name": "NewBaseBdev", 00:26:33.565 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:33.565 "is_configured": true, 00:26:33.565 "data_offset": 2048, 00:26:33.565 "data_size": 63488 00:26:33.565 }, 00:26:33.565 { 00:26:33.565 "name": "BaseBdev2", 00:26:33.565 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:33.565 "is_configured": true, 00:26:33.565 "data_offset": 2048, 00:26:33.565 "data_size": 63488 00:26:33.565 }, 
00:26:33.565 { 00:26:33.565 "name": "BaseBdev3", 00:26:33.565 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:33.565 "is_configured": true, 00:26:33.565 "data_offset": 2048, 00:26:33.565 "data_size": 63488 00:26:33.565 }, 00:26:33.565 { 00:26:33.565 "name": "BaseBdev4", 00:26:33.565 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:33.565 "is_configured": true, 00:26:33.565 "data_offset": 2048, 00:26:33.565 "data_size": 63488 00:26:33.565 } 00:26:33.565 ] 00:26:33.565 }' 00:26:33.565 07:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.565 07:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@337 -- # verify_raid_bdev_properties Existed_Raid 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # local name 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:33.823 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:34.081 [2024-05-16 07:39:27.456030] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.081 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:34.081 "name": "Existed_Raid", 00:26:34.081 "aliases": [ 00:26:34.081 "66a3cb87-1357-11ef-8e8f-9dd684e56d79" 00:26:34.081 ], 00:26:34.081 "product_name": "Raid Volume", 00:26:34.081 "block_size": 512, 00:26:34.081 "num_blocks": 63488, 00:26:34.081 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:34.081 "assigned_rate_limits": { 00:26:34.081 "rw_ios_per_sec": 0, 00:26:34.081 "rw_mbytes_per_sec": 0, 00:26:34.081 "r_mbytes_per_sec": 0, 00:26:34.081 "w_mbytes_per_sec": 0 00:26:34.081 }, 00:26:34.081 "claimed": false, 00:26:34.081 "zoned": false, 00:26:34.081 "supported_io_types": { 00:26:34.081 "read": true, 00:26:34.081 "write": true, 00:26:34.081 "unmap": false, 00:26:34.081 "write_zeroes": true, 00:26:34.081 "flush": false, 00:26:34.081 "reset": true, 00:26:34.081 "compare": false, 00:26:34.081 "compare_and_write": false, 00:26:34.081 "abort": false, 00:26:34.081 "nvme_admin": false, 00:26:34.082 "nvme_io": false 00:26:34.082 }, 00:26:34.082 "memory_domains": [ 00:26:34.082 { 00:26:34.082 "dma_device_id": "system", 00:26:34.082 "dma_device_type": 1 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.082 "dma_device_type": 2 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "system", 00:26:34.082 "dma_device_type": 1 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.082 "dma_device_type": 2 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "system", 00:26:34.082 "dma_device_type": 1 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.082 "dma_device_type": 2 
00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "system", 00:26:34.082 "dma_device_type": 1 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.082 "dma_device_type": 2 00:26:34.082 } 00:26:34.082 ], 00:26:34.082 "driver_specific": { 00:26:34.082 "raid": { 00:26:34.082 "uuid": "66a3cb87-1357-11ef-8e8f-9dd684e56d79", 00:26:34.082 "strip_size_kb": 0, 00:26:34.082 "state": "online", 00:26:34.082 "raid_level": "raid1", 00:26:34.082 "superblock": true, 00:26:34.082 "num_base_bdevs": 4, 00:26:34.082 "num_base_bdevs_discovered": 4, 00:26:34.082 "num_base_bdevs_operational": 4, 00:26:34.082 "base_bdevs_list": [ 00:26:34.082 { 00:26:34.082 "name": "NewBaseBdev", 00:26:34.082 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:34.082 "is_configured": true, 00:26:34.082 "data_offset": 2048, 00:26:34.082 "data_size": 63488 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "name": "BaseBdev2", 00:26:34.082 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:34.082 "is_configured": true, 00:26:34.082 "data_offset": 2048, 00:26:34.082 "data_size": 63488 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "name": "BaseBdev3", 00:26:34.082 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:34.082 "is_configured": true, 00:26:34.082 "data_offset": 2048, 00:26:34.082 "data_size": 63488 00:26:34.082 }, 00:26:34.082 { 00:26:34.082 "name": "BaseBdev4", 00:26:34.082 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:34.082 "is_configured": true, 00:26:34.082 "data_offset": 2048, 00:26:34.082 "data_size": 63488 00:26:34.082 } 00:26:34.082 ] 00:26:34.082 } 00:26:34.082 } 00:26:34.082 }' 00:26:34.082 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:34.082 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@202 -- # base_bdev_names='NewBaseBdev 00:26:34.082 BaseBdev2 00:26:34.082 BaseBdev3 00:26:34.082 BaseBdev4' 00:26:34.082 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.082 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:34.082 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:34.340 "name": "NewBaseBdev", 00:26:34.340 "aliases": [ 00:26:34.340 "67d807ad-1357-11ef-8e8f-9dd684e56d79" 00:26:34.340 ], 00:26:34.340 "product_name": "Malloc disk", 00:26:34.340 "block_size": 512, 00:26:34.340 "num_blocks": 65536, 00:26:34.340 "uuid": "67d807ad-1357-11ef-8e8f-9dd684e56d79", 00:26:34.340 "assigned_rate_limits": { 00:26:34.340 "rw_ios_per_sec": 0, 00:26:34.340 "rw_mbytes_per_sec": 0, 00:26:34.340 "r_mbytes_per_sec": 0, 00:26:34.340 "w_mbytes_per_sec": 0 00:26:34.340 }, 00:26:34.340 "claimed": true, 00:26:34.340 "claim_type": "exclusive_write", 00:26:34.340 "zoned": false, 00:26:34.340 "supported_io_types": { 00:26:34.340 "read": true, 00:26:34.340 "write": true, 00:26:34.340 "unmap": true, 00:26:34.340 "write_zeroes": true, 00:26:34.340 "flush": true, 00:26:34.340 "reset": true, 00:26:34.340 "compare": false, 00:26:34.340 "compare_and_write": false, 00:26:34.340 "abort": true, 00:26:34.340 "nvme_admin": false, 00:26:34.340 "nvme_io": false 00:26:34.340 }, 00:26:34.340 "memory_domains": [ 
00:26:34.340 { 00:26:34.340 "dma_device_id": "system", 00:26:34.340 "dma_device_type": 1 00:26:34.340 }, 00:26:34.340 { 00:26:34.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.340 "dma_device_type": 2 00:26:34.340 } 00:26:34.340 ], 00:26:34.340 "driver_specific": {} 00:26:34.340 }' 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:34.340 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:34.599 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:34.599 "name": "BaseBdev2", 00:26:34.599 "aliases": [ 00:26:34.599 "6512cb7e-1357-11ef-8e8f-9dd684e56d79" 00:26:34.599 ], 00:26:34.599 "product_name": "Malloc disk", 00:26:34.599 "block_size": 512, 00:26:34.599 "num_blocks": 65536, 00:26:34.599 "uuid": "6512cb7e-1357-11ef-8e8f-9dd684e56d79", 00:26:34.599 "assigned_rate_limits": { 00:26:34.599 "rw_ios_per_sec": 0, 00:26:34.599 "rw_mbytes_per_sec": 0, 00:26:34.599 "r_mbytes_per_sec": 0, 00:26:34.599 "w_mbytes_per_sec": 0 00:26:34.599 }, 00:26:34.599 "claimed": true, 00:26:34.599 "claim_type": "exclusive_write", 00:26:34.599 "zoned": false, 00:26:34.599 "supported_io_types": { 00:26:34.599 "read": true, 00:26:34.599 "write": true, 00:26:34.599 "unmap": true, 00:26:34.599 "write_zeroes": true, 00:26:34.599 "flush": true, 00:26:34.599 "reset": true, 00:26:34.599 "compare": false, 00:26:34.599 "compare_and_write": false, 00:26:34.599 "abort": true, 00:26:34.599 "nvme_admin": false, 00:26:34.599 "nvme_io": false 00:26:34.599 }, 00:26:34.599 "memory_domains": [ 00:26:34.599 { 00:26:34.599 "dma_device_id": "system", 00:26:34.599 "dma_device_type": 1 00:26:34.599 }, 00:26:34.599 { 00:26:34.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.599 "dma_device_type": 2 00:26:34.599 } 00:26:34.599 ], 00:26:34.599 "driver_specific": {} 00:26:34.599 }' 00:26:34.599 07:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.599 07:39:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:34.599 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:34.859 "name": "BaseBdev3", 00:26:34.859 "aliases": [ 00:26:34.859 "659b8310-1357-11ef-8e8f-9dd684e56d79" 00:26:34.859 ], 00:26:34.859 "product_name": "Malloc disk", 00:26:34.859 "block_size": 512, 00:26:34.859 "num_blocks": 65536, 00:26:34.859 "uuid": "659b8310-1357-11ef-8e8f-9dd684e56d79", 00:26:34.859 "assigned_rate_limits": { 00:26:34.859 "rw_ios_per_sec": 0, 00:26:34.859 "rw_mbytes_per_sec": 0, 00:26:34.859 "r_mbytes_per_sec": 0, 00:26:34.859 "w_mbytes_per_sec": 0 00:26:34.859 }, 00:26:34.859 "claimed": true, 00:26:34.859 "claim_type": "exclusive_write", 00:26:34.859 "zoned": false, 00:26:34.859 "supported_io_types": { 00:26:34.859 "read": true, 00:26:34.859 "write": true, 00:26:34.859 "unmap": true, 00:26:34.859 "write_zeroes": true, 00:26:34.859 "flush": true, 00:26:34.859 "reset": true, 00:26:34.859 "compare": false, 00:26:34.859 "compare_and_write": false, 00:26:34.859 "abort": true, 00:26:34.859 "nvme_admin": false, 00:26:34.859 "nvme_io": false 00:26:34.859 }, 00:26:34.859 "memory_domains": [ 00:26:34.859 { 00:26:34.859 "dma_device_id": "system", 00:26:34.859 "dma_device_type": 1 00:26:34.859 }, 00:26:34.859 { 00:26:34.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.859 "dma_device_type": 2 00:26:34.859 } 00:26:34.859 ], 00:26:34.859 "driver_specific": {} 00:26:34.859 }' 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:34.859 07:39:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:34.859 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:35.425 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:35.425 "name": "BaseBdev4", 00:26:35.425 "aliases": [ 00:26:35.425 "66212dda-1357-11ef-8e8f-9dd684e56d79" 00:26:35.425 ], 00:26:35.425 "product_name": "Malloc disk", 00:26:35.425 "block_size": 512, 00:26:35.425 "num_blocks": 65536, 00:26:35.425 "uuid": "66212dda-1357-11ef-8e8f-9dd684e56d79", 00:26:35.425 "assigned_rate_limits": { 00:26:35.425 "rw_ios_per_sec": 0, 00:26:35.425 "rw_mbytes_per_sec": 0, 00:26:35.425 "r_mbytes_per_sec": 0, 00:26:35.425 "w_mbytes_per_sec": 0 00:26:35.425 }, 00:26:35.425 "claimed": true, 00:26:35.425 "claim_type": "exclusive_write", 00:26:35.425 "zoned": false, 00:26:35.425 "supported_io_types": { 00:26:35.425 "read": true, 00:26:35.425 "write": true, 00:26:35.425 "unmap": true, 00:26:35.425 "write_zeroes": true, 00:26:35.426 "flush": true, 00:26:35.426 "reset": true, 00:26:35.426 "compare": false, 00:26:35.426 "compare_and_write": false, 00:26:35.426 "abort": true, 00:26:35.426 "nvme_admin": false, 00:26:35.426 "nvme_io": false 00:26:35.426 }, 00:26:35.426 "memory_domains": [ 00:26:35.426 { 00:26:35.426 "dma_device_id": "system", 00:26:35.426 "dma_device_type": 1 00:26:35.426 }, 00:26:35.426 { 00:26:35.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.426 "dma_device_type": 2 00:26:35.426 } 00:26:35.426 ], 00:26:35.426 "driver_specific": {} 00:26:35.426 }' 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:35.426 07:39:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:35.426 07:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@339 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:35.683 [2024-05-16 07:39:29.047978] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:35.683 [2024-05-16 07:39:29.048003] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.683 [2024-05-16 07:39:29.048021] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.683 [2024-05-16 07:39:29.048088] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.683 [2024-05-16 07:39:29.048101] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b83cf00 name Existed_Raid, state offline 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@342 -- # killprocess 62509 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 62509 ']' 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 62509 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps -c -o command 62509 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # tail -1 00:26:35.683 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:26:35.683 killing process with pid 62509 00:26:35.684 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:26:35.684 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62509' 00:26:35.684 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 62509 00:26:35.684 [2024-05-16 07:39:29.085353] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:35.684 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 62509 00:26:35.684 [2024-05-16 07:39:29.104517] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:35.942 07:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@344 -- # return 0 00:26:35.942 00:26:35.942 real 0m26.806s 00:26:35.942 user 0m49.103s 00:26:35.942 sys 0m3.720s 00:26:35.942 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:35.942 07:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:35.942 ************************************ 00:26:35.942 END TEST raid_state_function_test_sb 00:26:35.942 ************************************ 00:26:35.942 07:39:29 bdev_raid -- bdev/bdev_raid.sh@805 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:35.942 07:39:29 bdev_raid -- common/autotest_common.sh@1097 -- # 
'[' 4 -le 1 ']' 00:26:35.942 07:39:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:35.942 07:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:35.942 ************************************ 00:26:35.942 START TEST raid_superblock_test 00:26:35.942 ************************************ 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:26:35.942 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63327 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63327 /var/tmp/spdk-raid.sock 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 63327 ']' 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:35.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:35.943 07:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.943 [2024-05-16 07:39:29.332546] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
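The bdev_svc app started above is the RPC server that every subsequent check in this trace talks to. Each verify_raid_bdev_state step reduces to the same pattern: query the raid bdevs over the UNIX socket with rpc.py and filter the JSON with jq. A minimal standalone sketch of that pattern, reusing the socket and script paths visible in the trace (an illustration of the pattern, not the test script itself; the bdev name is "raid_bdev1" in this test and "Existed_Raid" in the state-function test above):

  # Fetch every raid bdev known to the app, keep the one the test expects.
  rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')
  # The fields compared by the test (state, raid_level,
  # num_base_bdevs_discovered, ...) are then read out of the captured JSON.
  echo "$info" | jq -r '.state'
  echo "$info" | jq -r '.num_base_bdevs_discovered'
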
00:26:35.943 [2024-05-16 07:39:29.332902] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:26:36.509 EAL: TSC is not safe to use in SMP mode 00:26:36.509 EAL: TSC is not invariant 00:26:36.509 [2024-05-16 07:39:29.813486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.509 [2024-05-16 07:39:29.908118] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:26:36.509 [2024-05-16 07:39:29.910712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.509 [2024-05-16 07:39:29.911607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.509 [2024-05-16 07:39:29.911626] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:37.075 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:37.333 malloc1 00:26:37.333 07:39:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:37.591 [2024-05-16 07:39:31.035586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:37.591 [2024-05-16 07:39:31.035643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.591 [2024-05-16 07:39:31.036235] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0780 00:26:37.591 [2024-05-16 07:39:31.036262] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.591 [2024-05-16 07:39:31.037075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.591 [2024-05-16 07:39:31.037106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:37.591 pt1 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
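The xtrace lines that follow loop over the four base devices: each pass creates a 32 MB malloc bdev (65536 x 512-byte blocks, matching the num_blocks in the bdev dumps), wraps it in a passthru bdev with a fixed UUID, and the loop is followed by a single bdev_raid_create with the superblock flag. Condensed into a sketch built from the exact commands in this trace (the test script drives the same steps through its own helpers):

  # Build the pt1..pt4 base bdevs and assemble raid_bdev1 on top of them.
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  # -r raid1: mirror level; -s: write an on-disk superblock to each base bdev.
  $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
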
00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:37.591 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:37.849 malloc2 00:26:37.849 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:38.107 [2024-05-16 07:39:31.555565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:38.107 [2024-05-16 07:39:31.555621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.107 [2024-05-16 07:39:31.555661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0c80 00:26:38.107 [2024-05-16 07:39:31.555669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.107 [2024-05-16 07:39:31.556143] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.107 [2024-05-16 07:39:31.556166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:38.107 pt2 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:38.107 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:38.364 malloc3 00:26:38.364 07:39:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:38.621 [2024-05-16 07:39:32.047560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:38.621 [2024-05-16 07:39:32.047610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.621 [2024-05-16 07:39:32.047633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1180 00:26:38.621 [2024-05-16 07:39:32.047640] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.621 [2024-05-16 07:39:32.048074] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.621 [2024-05-16 07:39:32.048102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:38.621 pt3 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:38.621 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:38.622 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:38.879 malloc4 00:26:38.879 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:39.138 [2024-05-16 07:39:32.543791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:39.138 [2024-05-16 07:39:32.543860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.138 [2024-05-16 07:39:32.543887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1680 00:26:39.138 [2024-05-16 07:39:32.543895] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.138 [2024-05-16 07:39:32.544428] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.138 [2024-05-16 07:39:32.544458] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:39.138 pt4 00:26:39.138 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:39.138 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:39.138 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:39.395 [2024-05-16 07:39:32.747778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:39.395 [2024-05-16 07:39:32.748185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.395 [2024-05-16 07:39:32.748215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.396 [2024-05-16 07:39:32.748223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:39.396 [2024-05-16 07:39:32.748265] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c3d1900 00:26:39.396 [2024-05-16 07:39:32.748270] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:39.396 [2024-05-16 07:39:32.748297] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82c433e20 00:26:39.396 [2024-05-16 07:39:32.748353] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c3d1900 00:26:39.396 [2024-05-16 07:39:32.748356] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c3d1900 00:26:39.396 [2024-05-16 07:39:32.748377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.396 07:39:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.653 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:39.653 "name": "raid_bdev1", 00:26:39.653 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:39.653 "strip_size_kb": 0, 00:26:39.653 "state": "online", 00:26:39.653 "raid_level": "raid1", 00:26:39.653 "superblock": true, 00:26:39.653 "num_base_bdevs": 4, 00:26:39.653 "num_base_bdevs_discovered": 4, 00:26:39.653 "num_base_bdevs_operational": 4, 00:26:39.653 "base_bdevs_list": [ 00:26:39.653 { 00:26:39.653 "name": "pt1", 00:26:39.653 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:39.653 "is_configured": true, 00:26:39.653 "data_offset": 2048, 00:26:39.653 "data_size": 63488 00:26:39.653 }, 00:26:39.653 { 00:26:39.653 "name": "pt2", 00:26:39.653 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:39.653 "is_configured": true, 00:26:39.653 "data_offset": 2048, 00:26:39.653 "data_size": 63488 00:26:39.653 }, 00:26:39.653 { 00:26:39.653 "name": "pt3", 00:26:39.653 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:39.653 "is_configured": true, 00:26:39.653 "data_offset": 2048, 00:26:39.653 "data_size": 63488 00:26:39.653 }, 00:26:39.653 { 00:26:39.653 "name": "pt4", 00:26:39.653 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:39.653 "is_configured": true, 00:26:39.653 "data_offset": 2048, 00:26:39.653 "data_size": 63488 00:26:39.653 } 00:26:39.653 ] 00:26:39.653 }' 00:26:39.654 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:39.654 07:39:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties 
raid_bdev1 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:39.912 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:40.171 [2024-05-16 07:39:33.587852] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:40.171 "name": "raid_bdev1", 00:26:40.171 "aliases": [ 00:26:40.171 "6f9debcd-1357-11ef-8e8f-9dd684e56d79" 00:26:40.171 ], 00:26:40.171 "product_name": "Raid Volume", 00:26:40.171 "block_size": 512, 00:26:40.171 "num_blocks": 63488, 00:26:40.171 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:40.171 "assigned_rate_limits": { 00:26:40.171 "rw_ios_per_sec": 0, 00:26:40.171 "rw_mbytes_per_sec": 0, 00:26:40.171 "r_mbytes_per_sec": 0, 00:26:40.171 "w_mbytes_per_sec": 0 00:26:40.171 }, 00:26:40.171 "claimed": false, 00:26:40.171 "zoned": false, 00:26:40.171 "supported_io_types": { 00:26:40.171 "read": true, 00:26:40.171 "write": true, 00:26:40.171 "unmap": false, 00:26:40.171 "write_zeroes": true, 00:26:40.171 "flush": false, 00:26:40.171 "reset": true, 00:26:40.171 "compare": false, 00:26:40.171 "compare_and_write": false, 00:26:40.171 "abort": false, 00:26:40.171 "nvme_admin": false, 00:26:40.171 "nvme_io": false 00:26:40.171 }, 00:26:40.171 "memory_domains": [ 00:26:40.171 { 00:26:40.171 "dma_device_id": "system", 00:26:40.171 "dma_device_type": 1 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.171 "dma_device_type": 2 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "system", 00:26:40.171 "dma_device_type": 1 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.171 "dma_device_type": 2 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "system", 00:26:40.171 "dma_device_type": 1 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.171 "dma_device_type": 2 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "system", 00:26:40.171 "dma_device_type": 1 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.171 "dma_device_type": 2 00:26:40.171 } 00:26:40.171 ], 00:26:40.171 "driver_specific": { 00:26:40.171 "raid": { 00:26:40.171 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:40.171 "strip_size_kb": 0, 00:26:40.171 "state": "online", 00:26:40.171 "raid_level": "raid1", 00:26:40.171 "superblock": true, 00:26:40.171 "num_base_bdevs": 4, 00:26:40.171 "num_base_bdevs_discovered": 4, 00:26:40.171 "num_base_bdevs_operational": 4, 00:26:40.171 "base_bdevs_list": [ 00:26:40.171 { 00:26:40.171 "name": "pt1", 00:26:40.171 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:40.171 "is_configured": true, 00:26:40.171 "data_offset": 2048, 00:26:40.171 "data_size": 63488 
00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "name": "pt2", 00:26:40.171 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:40.171 "is_configured": true, 00:26:40.171 "data_offset": 2048, 00:26:40.171 "data_size": 63488 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "name": "pt3", 00:26:40.171 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:40.171 "is_configured": true, 00:26:40.171 "data_offset": 2048, 00:26:40.171 "data_size": 63488 00:26:40.171 }, 00:26:40.171 { 00:26:40.171 "name": "pt4", 00:26:40.171 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:40.171 "is_configured": true, 00:26:40.171 "data_offset": 2048, 00:26:40.171 "data_size": 63488 00:26:40.171 } 00:26:40.171 ] 00:26:40.171 } 00:26:40.171 } 00:26:40.171 }' 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:26:40.171 pt2 00:26:40.171 pt3 00:26:40.171 pt4' 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:40.171 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:40.429 "name": "pt1", 00:26:40.429 "aliases": [ 00:26:40.429 "5a642c01-0990-af58-b4ce-e45021b92a83" 00:26:40.429 ], 00:26:40.429 "product_name": "passthru", 00:26:40.429 "block_size": 512, 00:26:40.429 "num_blocks": 65536, 00:26:40.429 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:40.429 "assigned_rate_limits": { 00:26:40.429 "rw_ios_per_sec": 0, 00:26:40.429 "rw_mbytes_per_sec": 0, 00:26:40.429 "r_mbytes_per_sec": 0, 00:26:40.429 "w_mbytes_per_sec": 0 00:26:40.429 }, 00:26:40.429 "claimed": true, 00:26:40.429 "claim_type": "exclusive_write", 00:26:40.429 "zoned": false, 00:26:40.429 "supported_io_types": { 00:26:40.429 "read": true, 00:26:40.429 "write": true, 00:26:40.429 "unmap": true, 00:26:40.429 "write_zeroes": true, 00:26:40.429 "flush": true, 00:26:40.429 "reset": true, 00:26:40.429 "compare": false, 00:26:40.429 "compare_and_write": false, 00:26:40.429 "abort": true, 00:26:40.429 "nvme_admin": false, 00:26:40.429 "nvme_io": false 00:26:40.429 }, 00:26:40.429 "memory_domains": [ 00:26:40.429 { 00:26:40.429 "dma_device_id": "system", 00:26:40.429 "dma_device_type": 1 00:26:40.429 }, 00:26:40.429 { 00:26:40.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.429 "dma_device_type": 2 00:26:40.429 } 00:26:40.429 ], 00:26:40.429 "driver_specific": { 00:26:40.429 "passthru": { 00:26:40.429 "name": "pt1", 00:26:40.429 "base_bdev_name": "malloc1" 00:26:40.429 } 00:26:40.429 } 00:26:40.429 }' 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:40.429 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:40.687 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:40.687 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:40.687 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:40.687 07:39:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:40.945 "name": "pt2", 00:26:40.945 "aliases": [ 00:26:40.945 "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6" 00:26:40.945 ], 00:26:40.945 "product_name": "passthru", 00:26:40.945 "block_size": 512, 00:26:40.945 "num_blocks": 65536, 00:26:40.945 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:40.945 "assigned_rate_limits": { 00:26:40.945 "rw_ios_per_sec": 0, 00:26:40.945 "rw_mbytes_per_sec": 0, 00:26:40.945 "r_mbytes_per_sec": 0, 00:26:40.945 "w_mbytes_per_sec": 0 00:26:40.945 }, 00:26:40.945 "claimed": true, 00:26:40.945 "claim_type": "exclusive_write", 00:26:40.945 "zoned": false, 00:26:40.945 "supported_io_types": { 00:26:40.945 "read": true, 00:26:40.945 "write": true, 00:26:40.945 "unmap": true, 00:26:40.945 "write_zeroes": true, 00:26:40.945 "flush": true, 00:26:40.945 "reset": true, 00:26:40.945 "compare": false, 00:26:40.945 "compare_and_write": false, 00:26:40.945 "abort": true, 00:26:40.945 "nvme_admin": false, 00:26:40.945 "nvme_io": false 00:26:40.945 }, 00:26:40.945 "memory_domains": [ 00:26:40.945 { 00:26:40.945 "dma_device_id": "system", 00:26:40.945 "dma_device_type": 1 00:26:40.945 }, 00:26:40.945 { 00:26:40.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.945 "dma_device_type": 2 00:26:40.945 } 00:26:40.945 ], 00:26:40.945 "driver_specific": { 00:26:40.945 "passthru": { 00:26:40.945 "name": "pt2", 00:26:40.945 "base_bdev_name": "malloc2" 00:26:40.945 } 00:26:40.945 } 00:26:40.945 }' 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:40.945 07:39:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:40.945 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:41.204 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:41.204 "name": "pt3", 00:26:41.204 "aliases": [ 00:26:41.204 "7c02a9b5-56eb-405c-ab55-6113bb6c387f" 00:26:41.204 ], 00:26:41.204 "product_name": "passthru", 00:26:41.204 "block_size": 512, 00:26:41.204 "num_blocks": 65536, 00:26:41.204 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:41.204 "assigned_rate_limits": { 00:26:41.205 "rw_ios_per_sec": 0, 00:26:41.205 "rw_mbytes_per_sec": 0, 00:26:41.205 "r_mbytes_per_sec": 0, 00:26:41.205 "w_mbytes_per_sec": 0 00:26:41.205 }, 00:26:41.205 "claimed": true, 00:26:41.205 "claim_type": "exclusive_write", 00:26:41.205 "zoned": false, 00:26:41.205 "supported_io_types": { 00:26:41.205 "read": true, 00:26:41.205 "write": true, 00:26:41.205 "unmap": true, 00:26:41.205 "write_zeroes": true, 00:26:41.205 "flush": true, 00:26:41.205 "reset": true, 00:26:41.205 "compare": false, 00:26:41.205 "compare_and_write": false, 00:26:41.205 "abort": true, 00:26:41.205 "nvme_admin": false, 00:26:41.205 "nvme_io": false 00:26:41.205 }, 00:26:41.205 "memory_domains": [ 00:26:41.205 { 00:26:41.205 "dma_device_id": "system", 00:26:41.205 "dma_device_type": 1 00:26:41.205 }, 00:26:41.205 { 00:26:41.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.205 "dma_device_type": 2 00:26:41.205 } 00:26:41.205 ], 00:26:41.205 "driver_specific": { 00:26:41.205 "passthru": { 00:26:41.205 "name": "pt3", 00:26:41.205 "base_bdev_name": "malloc3" 00:26:41.205 } 00:26:41.205 } 00:26:41.205 }' 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b pt4 00:26:41.205 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:41.463 "name": "pt4", 00:26:41.463 "aliases": [ 00:26:41.463 "7506cacc-77e3-4150-a909-7afe04ed0217" 00:26:41.463 ], 00:26:41.463 "product_name": "passthru", 00:26:41.463 "block_size": 512, 00:26:41.463 "num_blocks": 65536, 00:26:41.463 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:41.463 "assigned_rate_limits": { 00:26:41.463 "rw_ios_per_sec": 0, 00:26:41.463 "rw_mbytes_per_sec": 0, 00:26:41.463 "r_mbytes_per_sec": 0, 00:26:41.463 "w_mbytes_per_sec": 0 00:26:41.463 }, 00:26:41.463 "claimed": true, 00:26:41.463 "claim_type": "exclusive_write", 00:26:41.463 "zoned": false, 00:26:41.463 "supported_io_types": { 00:26:41.463 "read": true, 00:26:41.463 "write": true, 00:26:41.463 "unmap": true, 00:26:41.463 "write_zeroes": true, 00:26:41.463 "flush": true, 00:26:41.463 "reset": true, 00:26:41.463 "compare": false, 00:26:41.463 "compare_and_write": false, 00:26:41.463 "abort": true, 00:26:41.463 "nvme_admin": false, 00:26:41.463 "nvme_io": false 00:26:41.463 }, 00:26:41.463 "memory_domains": [ 00:26:41.463 { 00:26:41.463 "dma_device_id": "system", 00:26:41.463 "dma_device_type": 1 00:26:41.463 }, 00:26:41.463 { 00:26:41.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.463 "dma_device_type": 2 00:26:41.463 } 00:26:41.463 ], 00:26:41.463 "driver_specific": { 00:26:41.463 "passthru": { 00:26:41.463 "name": "pt4", 00:26:41.463 "base_bdev_name": "malloc4" 00:26:41.463 } 00:26:41.463 } 00:26:41.463 }' 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:41.463 07:39:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:41.721 [2024-05-16 07:39:35.195880] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.721 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6f9debcd-1357-11ef-8e8f-9dd684e56d79 00:26:41.721 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6f9debcd-1357-11ef-8e8f-9dd684e56d79 ']' 00:26:41.721 07:39:35 
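
The entries above repeat the same per-base-bdev check for pt1 through pt4: fetch the bdev descriptor with bdev_get_bdevs -b <name> and compare a handful of fields with jq (block_size must be 512; md_size, md_interleave and dif_type must all be null). A minimal standalone sketch of that loop, reusing only the rpc.py path, socket and jq filters shown in the trace:

  # Sketch only: re-run the property checks the trace performs for each passthru base bdev.
  set -e
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in pt1 pt2 pt3 pt4; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size <<< "$info") == 512 ]]     # data block size of the malloc backing bdev
      [[ $(jq .md_size <<< "$info") == null ]]       # no separate metadata region
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type <<< "$info") == null ]]      # no end-to-end data protection configured
  done
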
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:41.979 [2024-05-16 07:39:35.419836] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:41.979 [2024-05-16 07:39:35.419858] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:41.979 [2024-05-16 07:39:35.419874] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.979 [2024-05-16 07:39:35.419891] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.979 [2024-05-16 07:39:35.419895] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d1900 name raid_bdev1, state offline 00:26:41.979 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.979 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:42.238 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:42.238 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:42.238 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:42.238 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:42.497 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:42.497 07:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:42.755 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:42.755 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:43.014 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:43.014 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:43.272 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:43.272 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:43.531 07:39:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:43.531 07:39:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:43.790 [2024-05-16 07:39:37.087853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:43.790 [2024-05-16 07:39:37.088300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:43.790 [2024-05-16 07:39:37.088353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:43.790 [2024-05-16 07:39:37.088361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:43.790 [2024-05-16 07:39:37.088391] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:43.790 [2024-05-16 07:39:37.088441] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:43.790 [2024-05-16 07:39:37.088450] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:43.790 [2024-05-16 07:39:37.088459] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:43.790 [2024-05-16 07:39:37.088467] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:43.790 [2024-05-16 07:39:37.088471] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d1680 name raid_bdev1, state configuring 00:26:43.790 request: 00:26:43.790 { 00:26:43.790 "name": "raid_bdev1", 00:26:43.790 "raid_level": "raid1", 00:26:43.790 "base_bdevs": [ 00:26:43.790 "malloc1", 00:26:43.790 "malloc2", 00:26:43.790 "malloc3", 00:26:43.790 "malloc4" 00:26:43.790 ], 00:26:43.790 "superblock": false, 00:26:43.790 "method": "bdev_raid_create", 00:26:43.790 "req_id": 1 00:26:43.790 } 00:26:43.790 Got JSON-RPC error response 00:26:43.790 response: 00:26:43.790 { 00:26:43.790 "code": -17, 00:26:43.790 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:43.790 } 00:26:43.790 07:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:43.790 07:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:43.790 07:39:37 
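
This block is the negative test: the raid bdev and its passthru wrappers were deleted just before, but malloc1 through malloc4 still carry raid_bdev1's on-disk superblock, so asking bdev_raid_create to build a fresh raid1 volume directly on those malloc bdevs is rejected, and the NOT/valid_exec_arg helpers only verify that the call fails. A sketch of the expected-failure call, with the JSON-RPC error copied from the trace:

  # Sketch only: the create call is expected to fail because the base bdevs already
  # hold a superblock belonging to another raid bdev.
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi
  # Error returned in the trace above:
  #   { "code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists" }
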
bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:43.790 07:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:43.790 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.790 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:44.072 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:44.072 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:44.072 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:44.344 [2024-05-16 07:39:37.683848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:44.344 [2024-05-16 07:39:37.683897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.344 [2024-05-16 07:39:37.683920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1180 00:26:44.344 [2024-05-16 07:39:37.683928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.344 [2024-05-16 07:39:37.684437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.344 [2024-05-16 07:39:37.684472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:44.344 [2024-05-16 07:39:37.684492] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:44.344 [2024-05-16 07:39:37.684502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:44.344 pt1 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.344 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.603 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:44.603 "name": "raid_bdev1", 00:26:44.603 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:44.603 "strip_size_kb": 0, 00:26:44.603 "state": 
"configuring", 00:26:44.603 "raid_level": "raid1", 00:26:44.603 "superblock": true, 00:26:44.603 "num_base_bdevs": 4, 00:26:44.603 "num_base_bdevs_discovered": 1, 00:26:44.603 "num_base_bdevs_operational": 4, 00:26:44.603 "base_bdevs_list": [ 00:26:44.603 { 00:26:44.603 "name": "pt1", 00:26:44.603 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:44.603 "is_configured": true, 00:26:44.603 "data_offset": 2048, 00:26:44.603 "data_size": 63488 00:26:44.603 }, 00:26:44.603 { 00:26:44.603 "name": null, 00:26:44.603 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:44.603 "is_configured": false, 00:26:44.603 "data_offset": 2048, 00:26:44.604 "data_size": 63488 00:26:44.604 }, 00:26:44.604 { 00:26:44.604 "name": null, 00:26:44.604 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:44.604 "is_configured": false, 00:26:44.604 "data_offset": 2048, 00:26:44.604 "data_size": 63488 00:26:44.604 }, 00:26:44.604 { 00:26:44.604 "name": null, 00:26:44.604 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:44.604 "is_configured": false, 00:26:44.604 "data_offset": 2048, 00:26:44.604 "data_size": 63488 00:26:44.604 } 00:26:44.604 ] 00:26:44.604 }' 00:26:44.604 07:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:44.604 07:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.863 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:26:44.863 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:45.122 [2024-05-16 07:39:38.615846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:45.123 [2024-05-16 07:39:38.615895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.123 [2024-05-16 07:39:38.615918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0780 00:26:45.123 [2024-05-16 07:39:38.615924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.123 [2024-05-16 07:39:38.616000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.123 [2024-05-16 07:39:38.616007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:45.123 [2024-05-16 07:39:38.616042] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:45.123 [2024-05-16 07:39:38.616048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:45.123 pt2 00:26:45.123 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:45.381 [2024-05-16 07:39:38.863849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:45.381 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:45.381 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:45.382 
07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.382 07:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.639 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:45.639 "name": "raid_bdev1", 00:26:45.639 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:45.639 "strip_size_kb": 0, 00:26:45.639 "state": "configuring", 00:26:45.639 "raid_level": "raid1", 00:26:45.639 "superblock": true, 00:26:45.639 "num_base_bdevs": 4, 00:26:45.639 "num_base_bdevs_discovered": 1, 00:26:45.639 "num_base_bdevs_operational": 4, 00:26:45.639 "base_bdevs_list": [ 00:26:45.639 { 00:26:45.639 "name": "pt1", 00:26:45.639 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:45.639 "is_configured": true, 00:26:45.639 "data_offset": 2048, 00:26:45.639 "data_size": 63488 00:26:45.639 }, 00:26:45.639 { 00:26:45.639 "name": null, 00:26:45.639 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:45.639 "is_configured": false, 00:26:45.639 "data_offset": 2048, 00:26:45.639 "data_size": 63488 00:26:45.639 }, 00:26:45.639 { 00:26:45.639 "name": null, 00:26:45.639 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:45.639 "is_configured": false, 00:26:45.639 "data_offset": 2048, 00:26:45.639 "data_size": 63488 00:26:45.639 }, 00:26:45.639 { 00:26:45.639 "name": null, 00:26:45.639 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:45.639 "is_configured": false, 00:26:45.639 "data_offset": 2048, 00:26:45.639 "data_size": 63488 00:26:45.639 } 00:26:45.639 ] 00:26:45.639 }' 00:26:45.639 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:45.639 07:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.202 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:46.202 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:46.202 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:46.459 [2024-05-16 07:39:39.759840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:46.459 [2024-05-16 07:39:39.759886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.459 [2024-05-16 07:39:39.759908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0780 00:26:46.459 [2024-05-16 07:39:39.759915] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.459 [2024-05-16 07:39:39.759991] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.459 [2024-05-16 07:39:39.759999] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:46.459 [2024-05-16 07:39:39.760014] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:46.459 [2024-05-16 07:39:39.760021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:46.459 pt2 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:46.459 [2024-05-16 07:39:39.959856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:46.459 [2024-05-16 07:39:39.959900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.459 [2024-05-16 07:39:39.959920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1b80 00:26:46.459 [2024-05-16 07:39:39.959927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.459 [2024-05-16 07:39:39.960001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.459 [2024-05-16 07:39:39.960010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:46.459 [2024-05-16 07:39:39.960025] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:46.459 [2024-05-16 07:39:39.960032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:46.459 pt3 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:46.459 07:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:46.716 [2024-05-16 07:39:40.247861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:46.716 [2024-05-16 07:39:40.247897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.716 [2024-05-16 07:39:40.247914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1900 00:26:46.716 [2024-05-16 07:39:40.247921] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.716 [2024-05-16 07:39:40.247985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.716 [2024-05-16 07:39:40.247993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:46.716 [2024-05-16 07:39:40.248028] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:46.716 [2024-05-16 07:39:40.248034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:46.716 [2024-05-16 07:39:40.248057] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c3d0c80 00:26:46.716 [2024-05-16 07:39:40.248061] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:46.716 [2024-05-16 07:39:40.248078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c433e20 00:26:46.716 [2024-05-16 
07:39:40.248120] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c3d0c80 00:26:46.716 [2024-05-16 07:39:40.248123] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c3d0c80 00:26:46.716 [2024-05-16 07:39:40.248139] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.716 pt4 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:46.716 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:46.973 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.973 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.229 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:47.229 "name": "raid_bdev1", 00:26:47.229 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:47.229 "strip_size_kb": 0, 00:26:47.229 "state": "online", 00:26:47.229 "raid_level": "raid1", 00:26:47.229 "superblock": true, 00:26:47.229 "num_base_bdevs": 4, 00:26:47.229 "num_base_bdevs_discovered": 4, 00:26:47.229 "num_base_bdevs_operational": 4, 00:26:47.229 "base_bdevs_list": [ 00:26:47.229 { 00:26:47.229 "name": "pt1", 00:26:47.229 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:47.229 "is_configured": true, 00:26:47.229 "data_offset": 2048, 00:26:47.229 "data_size": 63488 00:26:47.229 }, 00:26:47.229 { 00:26:47.229 "name": "pt2", 00:26:47.229 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:47.229 "is_configured": true, 00:26:47.229 "data_offset": 2048, 00:26:47.229 "data_size": 63488 00:26:47.229 }, 00:26:47.229 { 00:26:47.229 "name": "pt3", 00:26:47.229 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:47.229 "is_configured": true, 00:26:47.229 "data_offset": 2048, 00:26:47.229 "data_size": 63488 00:26:47.229 }, 00:26:47.229 { 00:26:47.229 "name": "pt4", 00:26:47.229 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:47.229 "is_configured": true, 00:26:47.229 "data_offset": 2048, 00:26:47.229 "data_size": 63488 00:26:47.229 } 00:26:47.229 ] 00:26:47.229 }' 00:26:47.229 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:47.229 07:39:40 
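
With pt4 re-created on malloc4 the last superblock is found, the fourth base bdev is claimed, and raid_bdev1 transitions to online. verify_raid_bdev_state then reads bdev_raid_get_bdevs all, selects the entry named raid_bdev1 with jq, and compares its state, RAID level, strip size and base-bdev counts against the expected values. A compact sketch of that check, assuming the same socket path and the expected values visible in the trace:

  # Sketch only: the state verification performed by verify_raid_bdev_state above.
  set -e
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state <<< "$tmp") == online ]]
  [[ $(jq -r .raid_level <<< "$tmp") == raid1 ]]
  [[ $(jq -r .strip_size_kb <<< "$tmp") == 0 ]]               # raid1 has no striping
  [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") == 4 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$tmp") == 4 ]]
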
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # local name 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:26:47.487 07:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:47.744 [2024-05-16 07:39:41.083902] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:26:47.744 "name": "raid_bdev1", 00:26:47.744 "aliases": [ 00:26:47.744 "6f9debcd-1357-11ef-8e8f-9dd684e56d79" 00:26:47.744 ], 00:26:47.744 "product_name": "Raid Volume", 00:26:47.744 "block_size": 512, 00:26:47.744 "num_blocks": 63488, 00:26:47.744 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:47.744 "assigned_rate_limits": { 00:26:47.744 "rw_ios_per_sec": 0, 00:26:47.744 "rw_mbytes_per_sec": 0, 00:26:47.744 "r_mbytes_per_sec": 0, 00:26:47.744 "w_mbytes_per_sec": 0 00:26:47.744 }, 00:26:47.744 "claimed": false, 00:26:47.744 "zoned": false, 00:26:47.744 "supported_io_types": { 00:26:47.744 "read": true, 00:26:47.744 "write": true, 00:26:47.744 "unmap": false, 00:26:47.744 "write_zeroes": true, 00:26:47.744 "flush": false, 00:26:47.744 "reset": true, 00:26:47.744 "compare": false, 00:26:47.744 "compare_and_write": false, 00:26:47.744 "abort": false, 00:26:47.744 "nvme_admin": false, 00:26:47.744 "nvme_io": false 00:26:47.744 }, 00:26:47.744 "memory_domains": [ 00:26:47.744 { 00:26:47.744 "dma_device_id": "system", 00:26:47.744 "dma_device_type": 1 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.744 "dma_device_type": 2 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "system", 00:26:47.744 "dma_device_type": 1 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.744 "dma_device_type": 2 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "system", 00:26:47.744 "dma_device_type": 1 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.744 "dma_device_type": 2 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "system", 00:26:47.744 "dma_device_type": 1 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.744 "dma_device_type": 2 00:26:47.744 } 00:26:47.744 ], 00:26:47.744 "driver_specific": { 00:26:47.744 "raid": { 00:26:47.744 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:47.744 "strip_size_kb": 0, 00:26:47.744 "state": "online", 00:26:47.744 "raid_level": "raid1", 00:26:47.744 "superblock": true, 00:26:47.744 "num_base_bdevs": 4, 00:26:47.744 "num_base_bdevs_discovered": 4, 00:26:47.744 "num_base_bdevs_operational": 4, 00:26:47.744 "base_bdevs_list": [ 00:26:47.744 { 
00:26:47.744 "name": "pt1", 00:26:47.744 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:47.744 "is_configured": true, 00:26:47.744 "data_offset": 2048, 00:26:47.744 "data_size": 63488 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "name": "pt2", 00:26:47.744 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:47.744 "is_configured": true, 00:26:47.744 "data_offset": 2048, 00:26:47.744 "data_size": 63488 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "name": "pt3", 00:26:47.744 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:47.744 "is_configured": true, 00:26:47.744 "data_offset": 2048, 00:26:47.744 "data_size": 63488 00:26:47.744 }, 00:26:47.744 { 00:26:47.744 "name": "pt4", 00:26:47.744 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:47.744 "is_configured": true, 00:26:47.744 "data_offset": 2048, 00:26:47.744 "data_size": 63488 00:26:47.744 } 00:26:47.744 ] 00:26:47.744 } 00:26:47.744 } 00:26:47.744 }' 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:26:47.744 pt2 00:26:47.744 pt3 00:26:47.744 pt4' 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:47.744 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:48.002 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:48.002 "name": "pt1", 00:26:48.002 "aliases": [ 00:26:48.002 "5a642c01-0990-af58-b4ce-e45021b92a83" 00:26:48.002 ], 00:26:48.002 "product_name": "passthru", 00:26:48.002 "block_size": 512, 00:26:48.002 "num_blocks": 65536, 00:26:48.002 "uuid": "5a642c01-0990-af58-b4ce-e45021b92a83", 00:26:48.002 "assigned_rate_limits": { 00:26:48.002 "rw_ios_per_sec": 0, 00:26:48.002 "rw_mbytes_per_sec": 0, 00:26:48.002 "r_mbytes_per_sec": 0, 00:26:48.002 "w_mbytes_per_sec": 0 00:26:48.002 }, 00:26:48.002 "claimed": true, 00:26:48.002 "claim_type": "exclusive_write", 00:26:48.002 "zoned": false, 00:26:48.002 "supported_io_types": { 00:26:48.002 "read": true, 00:26:48.002 "write": true, 00:26:48.002 "unmap": true, 00:26:48.002 "write_zeroes": true, 00:26:48.002 "flush": true, 00:26:48.002 "reset": true, 00:26:48.002 "compare": false, 00:26:48.002 "compare_and_write": false, 00:26:48.002 "abort": true, 00:26:48.002 "nvme_admin": false, 00:26:48.002 "nvme_io": false 00:26:48.002 }, 00:26:48.002 "memory_domains": [ 00:26:48.002 { 00:26:48.002 "dma_device_id": "system", 00:26:48.002 "dma_device_type": 1 00:26:48.002 }, 00:26:48.002 { 00:26:48.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.002 "dma_device_type": 2 00:26:48.002 } 00:26:48.002 ], 00:26:48.002 "driver_specific": { 00:26:48.002 "passthru": { 00:26:48.002 "name": "pt1", 00:26:48.003 "base_bdev_name": "malloc1" 00:26:48.003 } 00:26:48.003 } 00:26:48.003 }' 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_size 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:48.003 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:48.261 "name": "pt2", 00:26:48.261 "aliases": [ 00:26:48.261 "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6" 00:26:48.261 ], 00:26:48.261 "product_name": "passthru", 00:26:48.261 "block_size": 512, 00:26:48.261 "num_blocks": 65536, 00:26:48.261 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:48.261 "assigned_rate_limits": { 00:26:48.261 "rw_ios_per_sec": 0, 00:26:48.261 "rw_mbytes_per_sec": 0, 00:26:48.261 "r_mbytes_per_sec": 0, 00:26:48.261 "w_mbytes_per_sec": 0 00:26:48.261 }, 00:26:48.261 "claimed": true, 00:26:48.261 "claim_type": "exclusive_write", 00:26:48.261 "zoned": false, 00:26:48.261 "supported_io_types": { 00:26:48.261 "read": true, 00:26:48.261 "write": true, 00:26:48.261 "unmap": true, 00:26:48.261 "write_zeroes": true, 00:26:48.261 "flush": true, 00:26:48.261 "reset": true, 00:26:48.261 "compare": false, 00:26:48.261 "compare_and_write": false, 00:26:48.261 "abort": true, 00:26:48.261 "nvme_admin": false, 00:26:48.261 "nvme_io": false 00:26:48.261 }, 00:26:48.261 "memory_domains": [ 00:26:48.261 { 00:26:48.261 "dma_device_id": "system", 00:26:48.261 "dma_device_type": 1 00:26:48.261 }, 00:26:48.261 { 00:26:48.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.261 "dma_device_type": 2 00:26:48.261 } 00:26:48.261 ], 00:26:48.261 "driver_specific": { 00:26:48.261 "passthru": { 00:26:48.261 "name": "pt2", 00:26:48.261 "base_bdev_name": "malloc2" 00:26:48.261 } 00:26:48.261 } 00:26:48.261 }' 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.261 07:39:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:48.261 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:48.519 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:48.519 "name": "pt3", 00:26:48.519 "aliases": [ 00:26:48.519 "7c02a9b5-56eb-405c-ab55-6113bb6c387f" 00:26:48.519 ], 00:26:48.519 "product_name": "passthru", 00:26:48.519 "block_size": 512, 00:26:48.519 "num_blocks": 65536, 00:26:48.519 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:48.519 "assigned_rate_limits": { 00:26:48.519 "rw_ios_per_sec": 0, 00:26:48.519 "rw_mbytes_per_sec": 0, 00:26:48.519 "r_mbytes_per_sec": 0, 00:26:48.519 "w_mbytes_per_sec": 0 00:26:48.519 }, 00:26:48.520 "claimed": true, 00:26:48.520 "claim_type": "exclusive_write", 00:26:48.520 "zoned": false, 00:26:48.520 "supported_io_types": { 00:26:48.520 "read": true, 00:26:48.520 "write": true, 00:26:48.520 "unmap": true, 00:26:48.520 "write_zeroes": true, 00:26:48.520 "flush": true, 00:26:48.520 "reset": true, 00:26:48.520 "compare": false, 00:26:48.520 "compare_and_write": false, 00:26:48.520 "abort": true, 00:26:48.520 "nvme_admin": false, 00:26:48.520 "nvme_io": false 00:26:48.520 }, 00:26:48.520 "memory_domains": [ 00:26:48.520 { 00:26:48.520 "dma_device_id": "system", 00:26:48.520 "dma_device_type": 1 00:26:48.520 }, 00:26:48.520 { 00:26:48.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.520 "dma_device_type": 2 00:26:48.520 } 00:26:48.520 ], 00:26:48.520 "driver_specific": { 00:26:48.520 "passthru": { 00:26:48.520 "name": "pt3", 00:26:48.520 "base_bdev_name": "malloc3" 00:26:48.520 } 00:26:48.520 } 00:26:48.520 }' 00:26:48.520 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.520 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:48.520 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:48.520 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:48.520 07:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 
-- # for name in $base_bdev_names 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:48.520 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:26:49.085 "name": "pt4", 00:26:49.085 "aliases": [ 00:26:49.085 "7506cacc-77e3-4150-a909-7afe04ed0217" 00:26:49.085 ], 00:26:49.085 "product_name": "passthru", 00:26:49.085 "block_size": 512, 00:26:49.085 "num_blocks": 65536, 00:26:49.085 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:49.085 "assigned_rate_limits": { 00:26:49.085 "rw_ios_per_sec": 0, 00:26:49.085 "rw_mbytes_per_sec": 0, 00:26:49.085 "r_mbytes_per_sec": 0, 00:26:49.085 "w_mbytes_per_sec": 0 00:26:49.085 }, 00:26:49.085 "claimed": true, 00:26:49.085 "claim_type": "exclusive_write", 00:26:49.085 "zoned": false, 00:26:49.085 "supported_io_types": { 00:26:49.085 "read": true, 00:26:49.085 "write": true, 00:26:49.085 "unmap": true, 00:26:49.085 "write_zeroes": true, 00:26:49.085 "flush": true, 00:26:49.085 "reset": true, 00:26:49.085 "compare": false, 00:26:49.085 "compare_and_write": false, 00:26:49.085 "abort": true, 00:26:49.085 "nvme_admin": false, 00:26:49.085 "nvme_io": false 00:26:49.085 }, 00:26:49.085 "memory_domains": [ 00:26:49.085 { 00:26:49.085 "dma_device_id": "system", 00:26:49.085 "dma_device_type": 1 00:26:49.085 }, 00:26:49.085 { 00:26:49.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.085 "dma_device_type": 2 00:26:49.085 } 00:26:49.085 ], 00:26:49.085 "driver_specific": { 00:26:49.085 "passthru": { 00:26:49.085 "name": "pt4", 00:26:49.085 "base_bdev_name": "malloc4" 00:26:49.085 } 00:26:49.085 } 00:26:49.085 }' 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 512 == 512 ]] 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:49.085 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:49.343 [2024-05-16 07:39:42.655893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
6f9debcd-1357-11ef-8e8f-9dd684e56d79 '!=' 6f9debcd-1357-11ef-8e8f-9dd684e56d79 ']' 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # case $1 in 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 0 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:49.343 [2024-05-16 07:39:42.847869] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.343 07:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.600 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:49.600 "name": "raid_bdev1", 00:26:49.600 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:49.600 "strip_size_kb": 0, 00:26:49.600 "state": "online", 00:26:49.600 "raid_level": "raid1", 00:26:49.600 "superblock": true, 00:26:49.600 "num_base_bdevs": 4, 00:26:49.600 "num_base_bdevs_discovered": 3, 00:26:49.600 "num_base_bdevs_operational": 3, 00:26:49.600 "base_bdevs_list": [ 00:26:49.600 { 00:26:49.600 "name": null, 00:26:49.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.600 "is_configured": false, 00:26:49.600 "data_offset": 2048, 00:26:49.600 "data_size": 63488 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "pt2", 00:26:49.600 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 2048, 00:26:49.600 "data_size": 63488 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "pt3", 00:26:49.600 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 2048, 00:26:49.600 "data_size": 63488 00:26:49.600 }, 00:26:49.600 { 00:26:49.600 "name": "pt4", 00:26:49.600 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:49.600 "is_configured": true, 00:26:49.600 "data_offset": 2048, 00:26:49.600 "data_size": 63488 00:26:49.600 } 00:26:49.600 ] 00:26:49.600 }' 00:26:49.600 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 
-- # xtrace_disable 00:26:49.600 07:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.860 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:50.118 [2024-05-16 07:39:43.651882] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:50.118 [2024-05-16 07:39:43.651903] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:50.118 [2024-05-16 07:39:43.651914] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:50.118 [2024-05-16 07:39:43.651932] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:50.118 [2024-05-16 07:39:43.651936] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d0c80 name raid_bdev1, state offline 00:26:50.118 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.375 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:50.633 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:50.633 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:50.633 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:50.633 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:50.633 07:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:50.891 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:50.891 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:50.891 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:51.150 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:51.150 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:51.150 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:51.408 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:51.408 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:51.408 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:51.408 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:51.408 07:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:51.667 [2024-05-16 07:39:44.999900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:51.667 [2024-05-16 07:39:44.999950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.667 [2024-05-16 07:39:44.999973] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1900 00:26:51.667 [2024-05-16 07:39:44.999980] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.667 [2024-05-16 07:39:45.000500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.667 [2024-05-16 07:39:45.000528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:51.667 [2024-05-16 07:39:45.000547] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:51.667 [2024-05-16 07:39:45.000558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:51.667 pt2 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.667 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.925 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.925 "name": "raid_bdev1", 00:26:51.925 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:51.925 "strip_size_kb": 0, 00:26:51.925 "state": "configuring", 00:26:51.925 "raid_level": "raid1", 00:26:51.925 "superblock": true, 00:26:51.925 "num_base_bdevs": 4, 00:26:51.925 "num_base_bdevs_discovered": 1, 00:26:51.925 "num_base_bdevs_operational": 3, 00:26:51.925 "base_bdevs_list": [ 00:26:51.925 { 00:26:51.925 "name": null, 00:26:51.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.925 "is_configured": false, 00:26:51.925 "data_offset": 2048, 00:26:51.925 "data_size": 63488 00:26:51.925 }, 00:26:51.925 { 00:26:51.925 "name": "pt2", 00:26:51.926 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:51.926 "is_configured": true, 00:26:51.926 "data_offset": 2048, 00:26:51.926 "data_size": 63488 00:26:51.926 }, 00:26:51.926 { 00:26:51.926 "name": null, 00:26:51.926 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:51.926 "is_configured": false, 00:26:51.926 "data_offset": 2048, 00:26:51.926 "data_size": 63488 00:26:51.926 }, 00:26:51.926 { 00:26:51.926 "name": null, 00:26:51.926 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:51.926 "is_configured": false, 00:26:51.926 "data_offset": 2048, 00:26:51.926 "data_size": 63488 00:26:51.926 } 00:26:51.926 ] 00:26:51.926 }' 00:26:51.926 07:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.926 07:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.184 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:52.184 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:52.184 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:52.443 [2024-05-16 07:39:45.879931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:52.443 [2024-05-16 07:39:45.879984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.443 [2024-05-16 07:39:45.880016] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1680 00:26:52.443 [2024-05-16 07:39:45.880031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.443 [2024-05-16 07:39:45.880178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.443 [2024-05-16 07:39:45.880194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:52.443 [2024-05-16 07:39:45.880228] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:52.443 [2024-05-16 07:39:45.880243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:52.443 pt3 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.443 07:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.702 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:52.702 "name": "raid_bdev1", 00:26:52.702 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:52.702 "strip_size_kb": 0, 00:26:52.702 "state": "configuring", 00:26:52.702 "raid_level": "raid1", 00:26:52.702 "superblock": true, 00:26:52.702 "num_base_bdevs": 4, 00:26:52.702 "num_base_bdevs_discovered": 2, 00:26:52.702 "num_base_bdevs_operational": 3, 00:26:52.702 "base_bdevs_list": [ 00:26:52.702 { 00:26:52.702 "name": 
null, 00:26:52.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.702 "is_configured": false, 00:26:52.702 "data_offset": 2048, 00:26:52.702 "data_size": 63488 00:26:52.702 }, 00:26:52.702 { 00:26:52.702 "name": "pt2", 00:26:52.702 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:52.702 "is_configured": true, 00:26:52.702 "data_offset": 2048, 00:26:52.702 "data_size": 63488 00:26:52.702 }, 00:26:52.702 { 00:26:52.702 "name": "pt3", 00:26:52.702 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:52.702 "is_configured": true, 00:26:52.702 "data_offset": 2048, 00:26:52.702 "data_size": 63488 00:26:52.702 }, 00:26:52.702 { 00:26:52.702 "name": null, 00:26:52.702 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:52.702 "is_configured": false, 00:26:52.702 "data_offset": 2048, 00:26:52.702 "data_size": 63488 00:26:52.702 } 00:26:52.702 ] 00:26:52.702 }' 00:26:52.702 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:52.702 07:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.961 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:52.961 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:52.961 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:52.961 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:53.273 [2024-05-16 07:39:46.751956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:53.273 [2024-05-16 07:39:46.752010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:53.273 [2024-05-16 07:39:46.752036] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0c80 00:26:53.273 [2024-05-16 07:39:46.752044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:53.273 [2024-05-16 07:39:46.752164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:53.273 [2024-05-16 07:39:46.752173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:53.273 [2024-05-16 07:39:46.752193] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:53.273 [2024-05-16 07:39:46.752202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:53.273 [2024-05-16 07:39:46.752228] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c3d0780 00:26:53.273 [2024-05-16 07:39:46.752232] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:53.273 [2024-05-16 07:39:46.752250] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c433e20 00:26:53.273 [2024-05-16 07:39:46.752301] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c3d0780 00:26:53.273 [2024-05-16 07:39:46.752305] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c3d0780 00:26:53.273 [2024-05-16 07:39:46.752321] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.273 pt4 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.273 07:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.531 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:53.531 "name": "raid_bdev1", 00:26:53.531 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:53.531 "strip_size_kb": 0, 00:26:53.531 "state": "online", 00:26:53.531 "raid_level": "raid1", 00:26:53.531 "superblock": true, 00:26:53.531 "num_base_bdevs": 4, 00:26:53.531 "num_base_bdevs_discovered": 3, 00:26:53.531 "num_base_bdevs_operational": 3, 00:26:53.531 "base_bdevs_list": [ 00:26:53.531 { 00:26:53.531 "name": null, 00:26:53.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.531 "is_configured": false, 00:26:53.531 "data_offset": 2048, 00:26:53.531 "data_size": 63488 00:26:53.531 }, 00:26:53.531 { 00:26:53.531 "name": "pt2", 00:26:53.531 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:53.531 "is_configured": true, 00:26:53.531 "data_offset": 2048, 00:26:53.531 "data_size": 63488 00:26:53.531 }, 00:26:53.531 { 00:26:53.531 "name": "pt3", 00:26:53.531 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:53.531 "is_configured": true, 00:26:53.531 "data_offset": 2048, 00:26:53.531 "data_size": 63488 00:26:53.531 }, 00:26:53.531 { 00:26:53.531 "name": "pt4", 00:26:53.531 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:53.531 "is_configured": true, 00:26:53.531 "data_offset": 2048, 00:26:53.531 "data_size": 63488 00:26:53.531 } 00:26:53.532 ] 00:26:53.532 }' 00:26:53.532 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:53.532 07:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.791 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:54.049 [2024-05-16 07:39:47.523959] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.050 [2024-05-16 07:39:47.523999] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.050 [2024-05-16 07:39:47.524030] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.050 [2024-05-16 07:39:47.524060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.050 
[2024-05-16 07:39:47.524070] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d0780 name raid_bdev1, state offline 00:26:54.050 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.050 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:54.307 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:54.307 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:54.307 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:54.307 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:54.307 07:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:54.566 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:54.824 [2024-05-16 07:39:48.251968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:54.824 [2024-05-16 07:39:48.252024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.824 [2024-05-16 07:39:48.252050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d0c80 00:26:54.824 [2024-05-16 07:39:48.252066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.824 [2024-05-16 07:39:48.252577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.824 [2024-05-16 07:39:48.252608] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:54.824 [2024-05-16 07:39:48.252630] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:54.824 [2024-05-16 07:39:48.252640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:54.824 [2024-05-16 07:39:48.252665] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:54.824 [2024-05-16 07:39:48.252669] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.824 [2024-05-16 07:39:48.252673] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d0780 name raid_bdev1, state configuring 00:26:54.824 [2024-05-16 07:39:48.252680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:54.824 [2024-05-16 07:39:48.252696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:54.824 pt1 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.824 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.083 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:55.083 "name": "raid_bdev1", 00:26:55.083 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:55.083 "strip_size_kb": 0, 00:26:55.083 "state": "configuring", 00:26:55.083 "raid_level": "raid1", 00:26:55.083 "superblock": true, 00:26:55.083 "num_base_bdevs": 4, 00:26:55.083 "num_base_bdevs_discovered": 2, 00:26:55.083 "num_base_bdevs_operational": 3, 00:26:55.083 "base_bdevs_list": [ 00:26:55.083 { 00:26:55.083 "name": null, 00:26:55.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.083 "is_configured": false, 00:26:55.083 "data_offset": 2048, 00:26:55.083 "data_size": 63488 00:26:55.083 }, 00:26:55.083 { 00:26:55.083 "name": "pt2", 00:26:55.083 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 00:26:55.083 "is_configured": true, 00:26:55.083 "data_offset": 2048, 00:26:55.083 "data_size": 63488 00:26:55.083 }, 00:26:55.083 { 00:26:55.083 "name": "pt3", 00:26:55.083 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:55.083 "is_configured": true, 00:26:55.083 "data_offset": 2048, 00:26:55.083 "data_size": 63488 00:26:55.083 }, 00:26:55.083 { 00:26:55.083 "name": null, 00:26:55.083 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:55.083 "is_configured": false, 00:26:55.083 "data_offset": 2048, 00:26:55.083 "data_size": 63488 00:26:55.083 } 00:26:55.083 ] 00:26:55.083 }' 00:26:55.083 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:55.083 07:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.651 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:55.651 07:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:55.651 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:55.651 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:55.910 [2024-05-16 07:39:49.451980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:55.910 [2024-05-16 07:39:49.452044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.910 [2024-05-16 07:39:49.452087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c3d1180 00:26:55.910 [2024-05-16 07:39:49.452095] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.910 [2024-05-16 07:39:49.452203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.910 [2024-05-16 07:39:49.452211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:55.910 [2024-05-16 07:39:49.452248] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:55.910 [2024-05-16 07:39:49.452256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:55.910 [2024-05-16 07:39:49.452281] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c3d0780 00:26:55.910 [2024-05-16 07:39:49.452285] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:55.910 [2024-05-16 07:39:49.452304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c433e20 00:26:55.910 [2024-05-16 07:39:49.452338] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c3d0780 00:26:55.910 [2024-05-16 07:39:49.452342] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c3d0780 00:26:55.910 [2024-05-16 07:39:49.452359] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.910 pt4 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.169 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.427 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:56.427 "name": "raid_bdev1", 00:26:56.427 "uuid": "6f9debcd-1357-11ef-8e8f-9dd684e56d79", 00:26:56.427 "strip_size_kb": 0, 00:26:56.427 "state": "online", 00:26:56.427 "raid_level": "raid1", 00:26:56.427 "superblock": true, 00:26:56.427 "num_base_bdevs": 4, 00:26:56.427 "num_base_bdevs_discovered": 3, 00:26:56.427 "num_base_bdevs_operational": 3, 00:26:56.427 "base_bdevs_list": [ 00:26:56.427 { 00:26:56.427 "name": null, 00:26:56.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.427 "is_configured": false, 00:26:56.427 "data_offset": 2048, 00:26:56.427 "data_size": 63488 00:26:56.427 }, 00:26:56.427 { 00:26:56.427 "name": "pt2", 00:26:56.427 "uuid": "bdddd8a3-c866-8e50-a8cc-ea39ddf512c6", 
00:26:56.428 "is_configured": true, 00:26:56.428 "data_offset": 2048, 00:26:56.428 "data_size": 63488 00:26:56.428 }, 00:26:56.428 { 00:26:56.428 "name": "pt3", 00:26:56.428 "uuid": "7c02a9b5-56eb-405c-ab55-6113bb6c387f", 00:26:56.428 "is_configured": true, 00:26:56.428 "data_offset": 2048, 00:26:56.428 "data_size": 63488 00:26:56.428 }, 00:26:56.428 { 00:26:56.428 "name": "pt4", 00:26:56.428 "uuid": "7506cacc-77e3-4150-a909-7afe04ed0217", 00:26:56.428 "is_configured": true, 00:26:56.428 "data_offset": 2048, 00:26:56.428 "data_size": 63488 00:26:56.428 } 00:26:56.428 ] 00:26:56.428 }' 00:26:56.428 07:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:56.428 07:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.686 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:56.686 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:56.686 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:56.686 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:56.686 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:57.272 [2024-05-16 07:39:50.508100] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6f9debcd-1357-11ef-8e8f-9dd684e56d79 '!=' 6f9debcd-1357-11ef-8e8f-9dd684e56d79 ']' 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63327 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 63327 ']' 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 63327 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps -c -o command 63327 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # tail -1 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:26:57.272 killing process with pid 63327 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63327' 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 63327 00:26:57.272 [2024-05-16 07:39:50.540099] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:57.272 [2024-05-16 07:39:50.540137] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:57.272 [2024-05-16 07:39:50.540156] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:57.272 [2024-05-16 07:39:50.540161] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c3d0780 name raid_bdev1, state 
offline 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 63327 00:26:57.272 [2024-05-16 07:39:50.559260] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:57.272 00:26:57.272 real 0m21.411s 00:26:57.272 user 0m39.207s 00:26:57.272 sys 0m2.865s 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:57.272 07:39:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.272 ************************************ 00:26:57.272 END TEST raid_superblock_test 00:26:57.272 ************************************ 00:26:57.272 07:39:50 bdev_raid -- bdev/bdev_raid.sh@809 -- # '[' '' = true ']' 00:26:57.272 07:39:50 bdev_raid -- bdev/bdev_raid.sh@818 -- # '[' n == y ']' 00:26:57.272 07:39:50 bdev_raid -- bdev/bdev_raid.sh@830 -- # base_blocklen=4096 00:26:57.272 07:39:50 bdev_raid -- bdev/bdev_raid.sh@832 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:26:57.272 07:39:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:57.272 07:39:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:57.272 07:39:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.272 ************************************ 00:26:57.272 START TEST raid_state_function_test_sb_4k 00:26:57.272 ************************************ 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # raid_pid=63961 00:26:57.272 Process raid pid: 63961 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 63961' 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@247 -- # waitforlisten 63961 /var/tmp/spdk-raid.sock 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 63961 ']' 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:57.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:57.272 07:39:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:57.272 [2024-05-16 07:39:50.790591] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:26:57.272 [2024-05-16 07:39:50.790764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:26:57.870 EAL: TSC is not safe to use in SMP mode 00:26:57.870 EAL: TSC is not invariant 00:26:57.870 [2024-05-16 07:39:51.277561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.870 [2024-05-16 07:39:51.360110] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
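(For reference, outside the captured trace: a minimal sketch of the RAID1 assembly this state-function test exercises against the bdev_svc app on /var/tmp/spdk-raid.sock, using only rpc.py invocations that appear verbatim in the log; names, sizes, and flags are taken from the trace itself.)

  # create two 32 MB malloc bdevs with 4096-byte blocks (the 8192-block base bdevs seen in the bdev dumps below)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2
  # assemble them into a RAID1 bdev with an on-disk superblock (-s)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # inspect the raid bdev: it reports "configuring" until every base bdev is present, then "online"
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
  # tear the raid bdev down again
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid

(The trace below performs the same steps one bdev at a time, which is why verify_raid_bdev_state first expects "configuring" and only expects "online" once both base bdevs have been added.)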
00:26:57.870 [2024-05-16 07:39:51.362238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.870 [2024-05-16 07:39:51.362957] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.870 [2024-05-16 07:39:51.362969] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:58.437 07:39:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:58.437 07:39:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:26:58.437 07:39:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:58.695 [2024-05-16 07:39:52.093532] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:58.695 [2024-05-16 07:39:52.093585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:58.695 [2024-05-16 07:39:52.093591] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:58.695 [2024-05-16 07:39:52.093599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.695 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.953 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:58.953 "name": "Existed_Raid", 00:26:58.953 "uuid": "7b25d95b-1357-11ef-8e8f-9dd684e56d79", 00:26:58.953 "strip_size_kb": 0, 00:26:58.953 "state": "configuring", 00:26:58.953 "raid_level": "raid1", 00:26:58.953 "superblock": true, 00:26:58.953 "num_base_bdevs": 2, 00:26:58.953 "num_base_bdevs_discovered": 0, 00:26:58.953 "num_base_bdevs_operational": 2, 00:26:58.953 "base_bdevs_list": [ 00:26:58.953 { 00:26:58.953 "name": "BaseBdev1", 00:26:58.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.953 "is_configured": false, 00:26:58.953 "data_offset": 0, 
00:26:58.953 "data_size": 0 00:26:58.953 }, 00:26:58.953 { 00:26:58.953 "name": "BaseBdev2", 00:26:58.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.953 "is_configured": false, 00:26:58.953 "data_offset": 0, 00:26:58.953 "data_size": 0 00:26:58.953 } 00:26:58.953 ] 00:26:58.953 }' 00:26:58.953 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:58.953 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:59.520 07:39:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:59.520 [2024-05-16 07:39:53.073500] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:59.520 [2024-05-16 07:39:53.073526] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c451500 name Existed_Raid, state configuring 00:26:59.779 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:00.037 [2024-05-16 07:39:53.345505] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:00.037 [2024-05-16 07:39:53.345552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:00.037 [2024-05-16 07:39:53.345556] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:00.037 [2024-05-16 07:39:53.345564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:27:00.037 [2024-05-16 07:39:53.570416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.037 BaseBdev1 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:00.037 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:00.665 07:39:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:00.665 [ 00:27:00.665 { 00:27:00.665 "name": "BaseBdev1", 00:27:00.665 "aliases": [ 00:27:00.665 "7c07108f-1357-11ef-8e8f-9dd684e56d79" 00:27:00.665 ], 00:27:00.665 "product_name": "Malloc disk", 00:27:00.665 "block_size": 4096, 00:27:00.665 "num_blocks": 8192, 00:27:00.666 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:00.666 
"assigned_rate_limits": { 00:27:00.666 "rw_ios_per_sec": 0, 00:27:00.666 "rw_mbytes_per_sec": 0, 00:27:00.666 "r_mbytes_per_sec": 0, 00:27:00.666 "w_mbytes_per_sec": 0 00:27:00.666 }, 00:27:00.666 "claimed": true, 00:27:00.666 "claim_type": "exclusive_write", 00:27:00.666 "zoned": false, 00:27:00.666 "supported_io_types": { 00:27:00.666 "read": true, 00:27:00.666 "write": true, 00:27:00.666 "unmap": true, 00:27:00.666 "write_zeroes": true, 00:27:00.666 "flush": true, 00:27:00.666 "reset": true, 00:27:00.666 "compare": false, 00:27:00.666 "compare_and_write": false, 00:27:00.666 "abort": true, 00:27:00.666 "nvme_admin": false, 00:27:00.666 "nvme_io": false 00:27:00.666 }, 00:27:00.666 "memory_domains": [ 00:27:00.666 { 00:27:00.666 "dma_device_id": "system", 00:27:00.666 "dma_device_type": 1 00:27:00.666 }, 00:27:00.666 { 00:27:00.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.666 "dma_device_type": 2 00:27:00.666 } 00:27:00.666 ], 00:27:00.666 "driver_specific": {} 00:27:00.666 } 00:27:00.666 ] 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.666 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.939 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:00.939 "name": "Existed_Raid", 00:27:00.939 "uuid": "7be4e28d-1357-11ef-8e8f-9dd684e56d79", 00:27:00.939 "strip_size_kb": 0, 00:27:00.939 "state": "configuring", 00:27:00.939 "raid_level": "raid1", 00:27:00.939 "superblock": true, 00:27:00.939 "num_base_bdevs": 2, 00:27:00.939 "num_base_bdevs_discovered": 1, 00:27:00.939 "num_base_bdevs_operational": 2, 00:27:00.939 "base_bdevs_list": [ 00:27:00.939 { 00:27:00.939 "name": "BaseBdev1", 00:27:00.939 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:00.939 "is_configured": true, 00:27:00.939 "data_offset": 256, 00:27:00.939 "data_size": 7936 00:27:00.939 }, 00:27:00.939 { 00:27:00.939 "name": "BaseBdev2", 00:27:00.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.939 "is_configured": false, 
00:27:00.939 "data_offset": 0, 00:27:00.939 "data_size": 0 00:27:00.939 } 00:27:00.939 ] 00:27:00.940 }' 00:27:00.940 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:00.940 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:01.509 07:39:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:01.509 [2024-05-16 07:39:54.989516] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:01.509 [2024-05-16 07:39:54.989547] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c451500 name Existed_Raid, state configuring 00:27:01.509 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:01.768 [2024-05-16 07:39:55.261542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:01.768 [2024-05-16 07:39:55.262249] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:01.768 [2024-05-16 07:39:55.262286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.768 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.027 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:02.027 "name": "Existed_Raid", 00:27:02.027 "uuid": "7d093f9f-1357-11ef-8e8f-9dd684e56d79", 00:27:02.027 "strip_size_kb": 0, 00:27:02.027 "state": "configuring", 00:27:02.027 "raid_level": "raid1", 00:27:02.027 "superblock": true, 00:27:02.027 
"num_base_bdevs": 2, 00:27:02.027 "num_base_bdevs_discovered": 1, 00:27:02.027 "num_base_bdevs_operational": 2, 00:27:02.027 "base_bdevs_list": [ 00:27:02.027 { 00:27:02.027 "name": "BaseBdev1", 00:27:02.027 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:02.027 "is_configured": true, 00:27:02.027 "data_offset": 256, 00:27:02.027 "data_size": 7936 00:27:02.027 }, 00:27:02.027 { 00:27:02.027 "name": "BaseBdev2", 00:27:02.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.027 "is_configured": false, 00:27:02.027 "data_offset": 0, 00:27:02.027 "data_size": 0 00:27:02.027 } 00:27:02.027 ] 00:27:02.027 }' 00:27:02.027 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:02.027 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.593 07:39:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:27:02.851 [2024-05-16 07:39:56.249695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:02.851 [2024-05-16 07:39:56.249749] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c451a00 00:27:02.851 [2024-05-16 07:39:56.249753] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:02.851 [2024-05-16 07:39:56.249770] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c4b4ec0 00:27:02.851 [2024-05-16 07:39:56.249801] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c451a00 00:27:02.851 [2024-05-16 07:39:56.249804] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c451a00 00:27:02.851 [2024-05-16 07:39:56.249818] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.851 BaseBdev2 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:02.851 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:03.108 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:03.366 [ 00:27:03.366 { 00:27:03.366 "name": "BaseBdev2", 00:27:03.366 "aliases": [ 00:27:03.366 "7da00396-1357-11ef-8e8f-9dd684e56d79" 00:27:03.366 ], 00:27:03.366 "product_name": "Malloc disk", 00:27:03.366 "block_size": 4096, 00:27:03.366 "num_blocks": 8192, 00:27:03.366 "uuid": "7da00396-1357-11ef-8e8f-9dd684e56d79", 00:27:03.366 "assigned_rate_limits": { 00:27:03.366 "rw_ios_per_sec": 0, 00:27:03.366 "rw_mbytes_per_sec": 0, 00:27:03.366 "r_mbytes_per_sec": 0, 
00:27:03.366 "w_mbytes_per_sec": 0 00:27:03.366 }, 00:27:03.366 "claimed": true, 00:27:03.366 "claim_type": "exclusive_write", 00:27:03.366 "zoned": false, 00:27:03.366 "supported_io_types": { 00:27:03.366 "read": true, 00:27:03.366 "write": true, 00:27:03.366 "unmap": true, 00:27:03.366 "write_zeroes": true, 00:27:03.366 "flush": true, 00:27:03.366 "reset": true, 00:27:03.366 "compare": false, 00:27:03.367 "compare_and_write": false, 00:27:03.367 "abort": true, 00:27:03.367 "nvme_admin": false, 00:27:03.367 "nvme_io": false 00:27:03.367 }, 00:27:03.367 "memory_domains": [ 00:27:03.367 { 00:27:03.367 "dma_device_id": "system", 00:27:03.367 "dma_device_type": 1 00:27:03.367 }, 00:27:03.367 { 00:27:03.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.367 "dma_device_type": 2 00:27:03.367 } 00:27:03.367 ], 00:27:03.367 "driver_specific": {} 00:27:03.367 } 00:27:03.367 ] 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.367 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.625 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:03.625 "name": "Existed_Raid", 00:27:03.625 "uuid": "7d093f9f-1357-11ef-8e8f-9dd684e56d79", 00:27:03.625 "strip_size_kb": 0, 00:27:03.625 "state": "online", 00:27:03.625 "raid_level": "raid1", 00:27:03.625 "superblock": true, 00:27:03.625 "num_base_bdevs": 2, 00:27:03.625 "num_base_bdevs_discovered": 2, 00:27:03.625 "num_base_bdevs_operational": 2, 00:27:03.625 "base_bdevs_list": [ 00:27:03.625 { 00:27:03.625 "name": "BaseBdev1", 00:27:03.625 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:03.625 "is_configured": true, 00:27:03.625 "data_offset": 256, 00:27:03.625 "data_size": 7936 00:27:03.625 }, 00:27:03.625 { 00:27:03.625 "name": "BaseBdev2", 00:27:03.625 "uuid": 
"7da00396-1357-11ef-8e8f-9dd684e56d79", 00:27:03.625 "is_configured": true, 00:27:03.625 "data_offset": 256, 00:27:03.625 "data_size": 7936 00:27:03.625 } 00:27:03.625 ] 00:27:03.625 }' 00:27:03.625 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:03.625 07:39:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # local name 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:03.882 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:04.142 [2024-05-16 07:39:57.501666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:04.142 "name": "Existed_Raid", 00:27:04.142 "aliases": [ 00:27:04.142 "7d093f9f-1357-11ef-8e8f-9dd684e56d79" 00:27:04.142 ], 00:27:04.142 "product_name": "Raid Volume", 00:27:04.142 "block_size": 4096, 00:27:04.142 "num_blocks": 7936, 00:27:04.142 "uuid": "7d093f9f-1357-11ef-8e8f-9dd684e56d79", 00:27:04.142 "assigned_rate_limits": { 00:27:04.142 "rw_ios_per_sec": 0, 00:27:04.142 "rw_mbytes_per_sec": 0, 00:27:04.142 "r_mbytes_per_sec": 0, 00:27:04.142 "w_mbytes_per_sec": 0 00:27:04.142 }, 00:27:04.142 "claimed": false, 00:27:04.142 "zoned": false, 00:27:04.142 "supported_io_types": { 00:27:04.142 "read": true, 00:27:04.142 "write": true, 00:27:04.142 "unmap": false, 00:27:04.142 "write_zeroes": true, 00:27:04.142 "flush": false, 00:27:04.142 "reset": true, 00:27:04.142 "compare": false, 00:27:04.142 "compare_and_write": false, 00:27:04.142 "abort": false, 00:27:04.142 "nvme_admin": false, 00:27:04.142 "nvme_io": false 00:27:04.142 }, 00:27:04.142 "memory_domains": [ 00:27:04.142 { 00:27:04.142 "dma_device_id": "system", 00:27:04.142 "dma_device_type": 1 00:27:04.142 }, 00:27:04.142 { 00:27:04.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.142 "dma_device_type": 2 00:27:04.142 }, 00:27:04.142 { 00:27:04.142 "dma_device_id": "system", 00:27:04.142 "dma_device_type": 1 00:27:04.142 }, 00:27:04.142 { 00:27:04.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.142 "dma_device_type": 2 00:27:04.142 } 00:27:04.142 ], 00:27:04.142 "driver_specific": { 00:27:04.142 "raid": { 00:27:04.142 "uuid": "7d093f9f-1357-11ef-8e8f-9dd684e56d79", 00:27:04.142 "strip_size_kb": 0, 00:27:04.142 "state": "online", 00:27:04.142 "raid_level": "raid1", 00:27:04.142 "superblock": true, 00:27:04.142 "num_base_bdevs": 2, 00:27:04.142 "num_base_bdevs_discovered": 2, 00:27:04.142 "num_base_bdevs_operational": 2, 00:27:04.142 "base_bdevs_list": [ 00:27:04.142 { 00:27:04.142 "name": "BaseBdev1", 
00:27:04.142 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:04.142 "is_configured": true, 00:27:04.142 "data_offset": 256, 00:27:04.142 "data_size": 7936 00:27:04.142 }, 00:27:04.142 { 00:27:04.142 "name": "BaseBdev2", 00:27:04.142 "uuid": "7da00396-1357-11ef-8e8f-9dd684e56d79", 00:27:04.142 "is_configured": true, 00:27:04.142 "data_offset": 256, 00:27:04.142 "data_size": 7936 00:27:04.142 } 00:27:04.142 ] 00:27:04.142 } 00:27:04.142 } 00:27:04.142 }' 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:27:04.142 BaseBdev2' 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:04.142 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:04.400 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:04.400 "name": "BaseBdev1", 00:27:04.400 "aliases": [ 00:27:04.400 "7c07108f-1357-11ef-8e8f-9dd684e56d79" 00:27:04.400 ], 00:27:04.400 "product_name": "Malloc disk", 00:27:04.400 "block_size": 4096, 00:27:04.400 "num_blocks": 8192, 00:27:04.400 "uuid": "7c07108f-1357-11ef-8e8f-9dd684e56d79", 00:27:04.400 "assigned_rate_limits": { 00:27:04.400 "rw_ios_per_sec": 0, 00:27:04.400 "rw_mbytes_per_sec": 0, 00:27:04.400 "r_mbytes_per_sec": 0, 00:27:04.400 "w_mbytes_per_sec": 0 00:27:04.400 }, 00:27:04.400 "claimed": true, 00:27:04.400 "claim_type": "exclusive_write", 00:27:04.400 "zoned": false, 00:27:04.400 "supported_io_types": { 00:27:04.400 "read": true, 00:27:04.400 "write": true, 00:27:04.400 "unmap": true, 00:27:04.400 "write_zeroes": true, 00:27:04.400 "flush": true, 00:27:04.400 "reset": true, 00:27:04.400 "compare": false, 00:27:04.400 "compare_and_write": false, 00:27:04.400 "abort": true, 00:27:04.400 "nvme_admin": false, 00:27:04.400 "nvme_io": false 00:27:04.400 }, 00:27:04.400 "memory_domains": [ 00:27:04.400 { 00:27:04.400 "dma_device_id": "system", 00:27:04.400 "dma_device_type": 1 00:27:04.400 }, 00:27:04.400 { 00:27:04.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.400 "dma_device_type": 2 00:27:04.400 } 00:27:04.400 ], 00:27:04.400 "driver_specific": {} 00:27:04.400 }' 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:04.401 07:39:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:04.401 07:39:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:04.659 "name": "BaseBdev2", 00:27:04.659 "aliases": [ 00:27:04.659 "7da00396-1357-11ef-8e8f-9dd684e56d79" 00:27:04.659 ], 00:27:04.659 "product_name": "Malloc disk", 00:27:04.659 "block_size": 4096, 00:27:04.659 "num_blocks": 8192, 00:27:04.659 "uuid": "7da00396-1357-11ef-8e8f-9dd684e56d79", 00:27:04.659 "assigned_rate_limits": { 00:27:04.659 "rw_ios_per_sec": 0, 00:27:04.659 "rw_mbytes_per_sec": 0, 00:27:04.659 "r_mbytes_per_sec": 0, 00:27:04.659 "w_mbytes_per_sec": 0 00:27:04.659 }, 00:27:04.659 "claimed": true, 00:27:04.659 "claim_type": "exclusive_write", 00:27:04.659 "zoned": false, 00:27:04.659 "supported_io_types": { 00:27:04.659 "read": true, 00:27:04.659 "write": true, 00:27:04.659 "unmap": true, 00:27:04.659 "write_zeroes": true, 00:27:04.659 "flush": true, 00:27:04.659 "reset": true, 00:27:04.659 "compare": false, 00:27:04.659 "compare_and_write": false, 00:27:04.659 "abort": true, 00:27:04.659 "nvme_admin": false, 00:27:04.659 "nvme_io": false 00:27:04.659 }, 00:27:04.659 "memory_domains": [ 00:27:04.659 { 00:27:04.659 "dma_device_id": "system", 00:27:04.659 "dma_device_type": 1 00:27:04.659 }, 00:27:04.659 { 00:27:04.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.659 "dma_device_type": 2 00:27:04.659 } 00:27:04.659 ], 00:27:04.659 "driver_specific": {} 00:27:04.659 }' 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # [[ 
null == null ]] 00:27:04.659 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:04.917 [2024-05-16 07:39:58.309655] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # local expected_state 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.917 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.175 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:05.175 "name": "Existed_Raid", 00:27:05.175 "uuid": "7d093f9f-1357-11ef-8e8f-9dd684e56d79", 00:27:05.175 "strip_size_kb": 0, 00:27:05.175 "state": "online", 00:27:05.175 "raid_level": "raid1", 00:27:05.175 "superblock": true, 00:27:05.175 "num_base_bdevs": 2, 00:27:05.175 "num_base_bdevs_discovered": 1, 00:27:05.175 "num_base_bdevs_operational": 1, 00:27:05.175 "base_bdevs_list": [ 00:27:05.175 { 00:27:05.176 "name": null, 00:27:05.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.176 "is_configured": false, 00:27:05.176 "data_offset": 256, 00:27:05.176 "data_size": 7936 00:27:05.176 }, 00:27:05.176 { 00:27:05.176 "name": "BaseBdev2", 00:27:05.176 "uuid": "7da00396-1357-11ef-8e8f-9dd684e56d79", 00:27:05.176 "is_configured": true, 00:27:05.176 "data_offset": 256, 00:27:05.176 "data_size": 7936 00:27:05.176 } 00:27:05.176 ] 00:27:05.176 }' 00:27:05.176 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:05.176 07:39:58 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:27:05.433 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:05.433 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.433 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.433 07:39:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:05.692 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:05.692 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:05.692 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:05.950 [2024-05-16 07:39:59.314957] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:05.950 [2024-05-16 07:39:59.315005] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:05.950 [2024-05-16 07:39:59.324271] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.950 [2024-05-16 07:39:59.324285] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.950 [2024-05-16 07:39:59.324290] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c451a00 name Existed_Raid, state offline 00:27:05.950 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:05.950 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.950 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.950 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@342 -- # killprocess 63961 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 63961 ']' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 63961 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps -c -o command 63961 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # tail -1 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:27:06.209 killing process with pid 63961 00:27:06.209 07:39:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63961' 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 63961 00:27:06.209 [2024-05-16 07:39:59.570250] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:06.209 [2024-05-16 07:39:59.570294] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:06.209 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 63961 00:27:06.467 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@344 -- # return 0 00:27:06.467 00:27:06.467 real 0m9.063s 00:27:06.467 user 0m15.732s 00:27:06.467 sys 0m1.535s 00:27:06.467 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:06.467 07:39:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.467 ************************************ 00:27:06.467 END TEST raid_state_function_test_sb_4k 00:27:06.467 ************************************ 00:27:06.468 07:39:59 bdev_raid -- bdev/bdev_raid.sh@833 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:06.468 07:39:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:27:06.468 07:39:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:06.468 07:39:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:06.468 ************************************ 00:27:06.468 START TEST raid_superblock_test_4k 00:27:06.468 ************************************ 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:06.468 07:39:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=64235 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 64235 /var/tmp/spdk-raid.sock 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 64235 ']' 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.468 07:39:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.468 [2024-05-16 07:39:59.891894] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:06.468 [2024-05-16 07:39:59.892102] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:27:07.035 EAL: TSC is not safe to use in SMP mode 00:27:07.035 EAL: TSC is not invariant 00:27:07.035 [2024-05-16 07:40:00.394447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.035 [2024-05-16 07:40:00.489447] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:27:07.035 [2024-05-16 07:40:00.492083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.035 [2024-05-16 07:40:00.492969] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:07.035 [2024-05-16 07:40:00.492985] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:07.601 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:07.602 07:40:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:27:07.860 malloc1 00:27:07.860 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:07.860 [2024-05-16 07:40:01.393376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:07.860 [2024-05-16 07:40:01.393434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.860 [2024-05-16 07:40:01.394011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073780 00:27:07.860 [2024-05-16 07:40:01.394038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.860 [2024-05-16 07:40:01.394785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.860 [2024-05-16 07:40:01.394818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:07.860 pt1 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:08.118 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:27:08.377 malloc2 00:27:08.377 07:40:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:08.635 [2024-05-16 07:40:01.993378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:08.635 [2024-05-16 07:40:01.993438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.635 [2024-05-16 07:40:01.993465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073c80 00:27:08.635 [2024-05-16 07:40:01.993474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.635 [2024-05-16 07:40:01.994084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.635 [2024-05-16 07:40:01.994113] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:08.635 pt2 00:27:08.635 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:08.635 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:08.635 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:08.892 [2024-05-16 07:40:02.213373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:08.892 [2024-05-16 07:40:02.213838] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:08.892 [2024-05-16 07:40:02.213889] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a073f00 00:27:08.892 [2024-05-16 07:40:02.213894] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:08.892 [2024-05-16 07:40:02.213927] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a0d6e20 00:27:08.892 [2024-05-16 07:40:02.213983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a073f00 00:27:08.892 [2024-05-16 07:40:02.213987] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a073f00 00:27:08.892 [2024-05-16 07:40:02.214008] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.892 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.150 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:09.150 "name": "raid_bdev1", 00:27:09.150 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:09.150 "strip_size_kb": 0, 00:27:09.150 "state": "online", 00:27:09.150 "raid_level": "raid1", 00:27:09.150 "superblock": true, 00:27:09.150 "num_base_bdevs": 2, 00:27:09.150 "num_base_bdevs_discovered": 2, 00:27:09.150 "num_base_bdevs_operational": 2, 00:27:09.150 "base_bdevs_list": [ 00:27:09.150 { 00:27:09.150 "name": "pt1", 00:27:09.150 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:09.150 "is_configured": true, 00:27:09.150 "data_offset": 256, 00:27:09.150 "data_size": 7936 00:27:09.150 }, 00:27:09.150 { 00:27:09.150 "name": "pt2", 00:27:09.150 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:09.150 "is_configured": true, 00:27:09.150 "data_offset": 256, 00:27:09.150 "data_size": 7936 00:27:09.150 } 00:27:09.150 ] 00:27:09.150 }' 00:27:09.150 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:09.150 07:40:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:09.407 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:09.408 07:40:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.666 [2024-05-16 07:40:02.985394] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:09.666 "name": "raid_bdev1", 00:27:09.666 "aliases": [ 00:27:09.666 "812e0387-1357-11ef-8e8f-9dd684e56d79" 00:27:09.666 ], 00:27:09.666 "product_name": "Raid Volume", 00:27:09.666 "block_size": 4096, 00:27:09.666 "num_blocks": 7936, 00:27:09.666 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:09.666 "assigned_rate_limits": { 00:27:09.666 "rw_ios_per_sec": 0, 00:27:09.666 "rw_mbytes_per_sec": 0, 00:27:09.666 "r_mbytes_per_sec": 0, 00:27:09.666 "w_mbytes_per_sec": 0 00:27:09.666 }, 00:27:09.666 "claimed": false, 00:27:09.666 "zoned": false, 00:27:09.666 "supported_io_types": { 00:27:09.666 "read": true, 00:27:09.666 "write": true, 00:27:09.666 
"unmap": false, 00:27:09.666 "write_zeroes": true, 00:27:09.666 "flush": false, 00:27:09.666 "reset": true, 00:27:09.666 "compare": false, 00:27:09.666 "compare_and_write": false, 00:27:09.666 "abort": false, 00:27:09.666 "nvme_admin": false, 00:27:09.666 "nvme_io": false 00:27:09.666 }, 00:27:09.666 "memory_domains": [ 00:27:09.666 { 00:27:09.666 "dma_device_id": "system", 00:27:09.666 "dma_device_type": 1 00:27:09.666 }, 00:27:09.666 { 00:27:09.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.666 "dma_device_type": 2 00:27:09.666 }, 00:27:09.666 { 00:27:09.666 "dma_device_id": "system", 00:27:09.666 "dma_device_type": 1 00:27:09.666 }, 00:27:09.666 { 00:27:09.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.666 "dma_device_type": 2 00:27:09.666 } 00:27:09.666 ], 00:27:09.666 "driver_specific": { 00:27:09.666 "raid": { 00:27:09.666 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:09.666 "strip_size_kb": 0, 00:27:09.666 "state": "online", 00:27:09.666 "raid_level": "raid1", 00:27:09.666 "superblock": true, 00:27:09.666 "num_base_bdevs": 2, 00:27:09.666 "num_base_bdevs_discovered": 2, 00:27:09.666 "num_base_bdevs_operational": 2, 00:27:09.666 "base_bdevs_list": [ 00:27:09.666 { 00:27:09.666 "name": "pt1", 00:27:09.666 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:09.666 "is_configured": true, 00:27:09.666 "data_offset": 256, 00:27:09.666 "data_size": 7936 00:27:09.666 }, 00:27:09.666 { 00:27:09.666 "name": "pt2", 00:27:09.666 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:09.666 "is_configured": true, 00:27:09.666 "data_offset": 256, 00:27:09.666 "data_size": 7936 00:27:09.666 } 00:27:09.666 ] 00:27:09.666 } 00:27:09.666 } 00:27:09.666 }' 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:09.666 pt2' 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:09.666 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:09.925 "name": "pt1", 00:27:09.925 "aliases": [ 00:27:09.925 "5e6b982a-aa7a-4854-b2ec-43e4530e561a" 00:27:09.925 ], 00:27:09.925 "product_name": "passthru", 00:27:09.925 "block_size": 4096, 00:27:09.925 "num_blocks": 8192, 00:27:09.925 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:09.925 "assigned_rate_limits": { 00:27:09.925 "rw_ios_per_sec": 0, 00:27:09.925 "rw_mbytes_per_sec": 0, 00:27:09.925 "r_mbytes_per_sec": 0, 00:27:09.925 "w_mbytes_per_sec": 0 00:27:09.925 }, 00:27:09.925 "claimed": true, 00:27:09.925 "claim_type": "exclusive_write", 00:27:09.925 "zoned": false, 00:27:09.925 "supported_io_types": { 00:27:09.925 "read": true, 00:27:09.925 "write": true, 00:27:09.925 "unmap": true, 00:27:09.925 "write_zeroes": true, 00:27:09.925 "flush": true, 00:27:09.925 "reset": true, 00:27:09.925 "compare": false, 00:27:09.925 "compare_and_write": false, 00:27:09.925 "abort": true, 00:27:09.925 "nvme_admin": false, 00:27:09.925 "nvme_io": false 00:27:09.925 }, 00:27:09.925 "memory_domains": [ 00:27:09.925 { 00:27:09.925 "dma_device_id": 
"system", 00:27:09.925 "dma_device_type": 1 00:27:09.925 }, 00:27:09.925 { 00:27:09.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.925 "dma_device_type": 2 00:27:09.925 } 00:27:09.925 ], 00:27:09.925 "driver_specific": { 00:27:09.925 "passthru": { 00:27:09.925 "name": "pt1", 00:27:09.925 "base_bdev_name": "malloc1" 00:27:09.925 } 00:27:09.925 } 00:27:09.925 }' 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:09.925 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:10.183 "name": "pt2", 00:27:10.183 "aliases": [ 00:27:10.183 "85612c0d-c596-e45a-9074-8d2810934ec7" 00:27:10.183 ], 00:27:10.183 "product_name": "passthru", 00:27:10.183 "block_size": 4096, 00:27:10.183 "num_blocks": 8192, 00:27:10.183 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:10.183 "assigned_rate_limits": { 00:27:10.183 "rw_ios_per_sec": 0, 00:27:10.183 "rw_mbytes_per_sec": 0, 00:27:10.183 "r_mbytes_per_sec": 0, 00:27:10.183 "w_mbytes_per_sec": 0 00:27:10.183 }, 00:27:10.183 "claimed": true, 00:27:10.183 "claim_type": "exclusive_write", 00:27:10.183 "zoned": false, 00:27:10.183 "supported_io_types": { 00:27:10.183 "read": true, 00:27:10.183 "write": true, 00:27:10.183 "unmap": true, 00:27:10.183 "write_zeroes": true, 00:27:10.183 "flush": true, 00:27:10.183 "reset": true, 00:27:10.183 "compare": false, 00:27:10.183 "compare_and_write": false, 00:27:10.183 "abort": true, 00:27:10.183 "nvme_admin": false, 00:27:10.183 "nvme_io": false 00:27:10.183 }, 00:27:10.183 "memory_domains": [ 00:27:10.183 { 00:27:10.183 "dma_device_id": "system", 00:27:10.183 "dma_device_type": 1 00:27:10.183 }, 00:27:10.183 { 00:27:10.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.183 "dma_device_type": 2 00:27:10.183 } 00:27:10.183 ], 00:27:10.183 "driver_specific": { 00:27:10.183 "passthru": { 00:27:10.183 "name": "pt2", 00:27:10.183 "base_bdev_name": "malloc2" 00:27:10.183 } 00:27:10.183 } 00:27:10.183 }' 00:27:10.183 07:40:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:10.183 07:40:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:10.749 [2024-05-16 07:40:03.997402] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.749 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=812e0387-1357-11ef-8e8f-9dd684e56d79 00:27:10.749 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 812e0387-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:10.749 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:10.749 [2024-05-16 07:40:04.237365] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:10.749 [2024-05-16 07:40:04.237388] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.749 [2024-05-16 07:40:04.237408] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.749 [2024-05-16 07:40:04.237422] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.749 [2024-05-16 07:40:04.237426] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a073f00 name raid_bdev1, state offline 00:27:10.749 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.749 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:11.007 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:11.007 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:11.007 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:11.007 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:11.266 
07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:11.266 07:40:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:11.523 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:11.523 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:11.782 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:12.040 [2024-05-16 07:40:05.505385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:12.040 [2024-05-16 07:40:05.505891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:12.040 [2024-05-16 07:40:05.505915] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:12.040 [2024-05-16 07:40:05.505950] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:12.040 [2024-05-16 07:40:05.505959] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.040 [2024-05-16 07:40:05.505964] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a073c80 name raid_bdev1, state configuring 00:27:12.040 request: 00:27:12.040 { 00:27:12.040 "name": 
"raid_bdev1", 00:27:12.040 "raid_level": "raid1", 00:27:12.040 "base_bdevs": [ 00:27:12.040 "malloc1", 00:27:12.040 "malloc2" 00:27:12.040 ], 00:27:12.040 "superblock": false, 00:27:12.040 "method": "bdev_raid_create", 00:27:12.040 "req_id": 1 00:27:12.040 } 00:27:12.040 Got JSON-RPC error response 00:27:12.040 response: 00:27:12.040 { 00:27:12.040 "code": -17, 00:27:12.040 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:12.040 } 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:12.040 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.297 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:12.297 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:12.297 07:40:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:12.578 [2024-05-16 07:40:06.009434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:12.578 [2024-05-16 07:40:06.009520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.578 [2024-05-16 07:40:06.009562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073780 00:27:12.578 [2024-05-16 07:40:06.009574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.578 [2024-05-16 07:40:06.010459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.578 [2024-05-16 07:40:06.010496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:12.578 [2024-05-16 07:40:06.010528] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:12.578 [2024-05-16 07:40:06.010544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:12.578 pt1 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:12.578 07:40:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.578 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.836 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:12.836 "name": "raid_bdev1", 00:27:12.836 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:12.836 "strip_size_kb": 0, 00:27:12.836 "state": "configuring", 00:27:12.836 "raid_level": "raid1", 00:27:12.836 "superblock": true, 00:27:12.836 "num_base_bdevs": 2, 00:27:12.836 "num_base_bdevs_discovered": 1, 00:27:12.836 "num_base_bdevs_operational": 2, 00:27:12.836 "base_bdevs_list": [ 00:27:12.836 { 00:27:12.836 "name": "pt1", 00:27:12.836 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:12.836 "is_configured": true, 00:27:12.836 "data_offset": 256, 00:27:12.836 "data_size": 7936 00:27:12.836 }, 00:27:12.836 { 00:27:12.836 "name": null, 00:27:12.836 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:12.836 "is_configured": false, 00:27:12.836 "data_offset": 256, 00:27:12.836 "data_size": 7936 00:27:12.836 } 00:27:12.836 ] 00:27:12.836 }' 00:27:12.836 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:12.836 07:40:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.095 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:13.095 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:13.095 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:13.095 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:13.354 [2024-05-16 07:40:06.729409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:13.354 [2024-05-16 07:40:06.729485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.354 [2024-05-16 07:40:06.729523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073f00 00:27:13.354 [2024-05-16 07:40:06.729531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.354 [2024-05-16 07:40:06.729673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.354 [2024-05-16 07:40:06.729682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:13.354 [2024-05-16 07:40:06.729707] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:13.354 [2024-05-16 07:40:06.729716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:13.354 [2024-05-16 07:40:06.729747] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a074180 00:27:13.354 [2024-05-16 07:40:06.729751] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:13.354 [2024-05-16 07:40:06.729769] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82a0d6e20 00:27:13.354 [2024-05-16 07:40:06.729823] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a074180 00:27:13.354 [2024-05-16 07:40:06.729827] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a074180 00:27:13.354 [2024-05-16 07:40:06.729845] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.354 pt2 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.354 07:40:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.612 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:13.612 "name": "raid_bdev1", 00:27:13.612 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:13.612 "strip_size_kb": 0, 00:27:13.612 "state": "online", 00:27:13.612 "raid_level": "raid1", 00:27:13.612 "superblock": true, 00:27:13.612 "num_base_bdevs": 2, 00:27:13.612 "num_base_bdevs_discovered": 2, 00:27:13.612 "num_base_bdevs_operational": 2, 00:27:13.612 "base_bdevs_list": [ 00:27:13.612 { 00:27:13.612 "name": "pt1", 00:27:13.612 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:13.612 "is_configured": true, 00:27:13.612 "data_offset": 256, 00:27:13.612 "data_size": 7936 00:27:13.612 }, 00:27:13.612 { 00:27:13.612 "name": "pt2", 00:27:13.612 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:13.612 "is_configured": true, 00:27:13.612 "data_offset": 256, 00:27:13.612 "data_size": 7936 00:27:13.612 } 00:27:13.612 ] 00:27:13.612 }' 00:27:13.612 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:13.612 07:40:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # local name 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.871 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:14.131 [2024-05-16 07:40:07.533417] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.131 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:14.131 "name": "raid_bdev1", 00:27:14.131 "aliases": [ 00:27:14.131 "812e0387-1357-11ef-8e8f-9dd684e56d79" 00:27:14.131 ], 00:27:14.131 "product_name": "Raid Volume", 00:27:14.131 "block_size": 4096, 00:27:14.131 "num_blocks": 7936, 00:27:14.131 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:14.131 "assigned_rate_limits": { 00:27:14.131 "rw_ios_per_sec": 0, 00:27:14.131 "rw_mbytes_per_sec": 0, 00:27:14.131 "r_mbytes_per_sec": 0, 00:27:14.131 "w_mbytes_per_sec": 0 00:27:14.131 }, 00:27:14.131 "claimed": false, 00:27:14.131 "zoned": false, 00:27:14.131 "supported_io_types": { 00:27:14.131 "read": true, 00:27:14.131 "write": true, 00:27:14.131 "unmap": false, 00:27:14.131 "write_zeroes": true, 00:27:14.131 "flush": false, 00:27:14.131 "reset": true, 00:27:14.131 "compare": false, 00:27:14.131 "compare_and_write": false, 00:27:14.131 "abort": false, 00:27:14.131 "nvme_admin": false, 00:27:14.131 "nvme_io": false 00:27:14.131 }, 00:27:14.131 "memory_domains": [ 00:27:14.131 { 00:27:14.131 "dma_device_id": "system", 00:27:14.131 "dma_device_type": 1 00:27:14.131 }, 00:27:14.131 { 00:27:14.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.131 "dma_device_type": 2 00:27:14.131 }, 00:27:14.131 { 00:27:14.131 "dma_device_id": "system", 00:27:14.131 "dma_device_type": 1 00:27:14.131 }, 00:27:14.131 { 00:27:14.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.131 "dma_device_type": 2 00:27:14.131 } 00:27:14.131 ], 00:27:14.131 "driver_specific": { 00:27:14.131 "raid": { 00:27:14.131 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:14.131 "strip_size_kb": 0, 00:27:14.131 "state": "online", 00:27:14.131 "raid_level": "raid1", 00:27:14.131 "superblock": true, 00:27:14.131 "num_base_bdevs": 2, 00:27:14.131 "num_base_bdevs_discovered": 2, 00:27:14.131 "num_base_bdevs_operational": 2, 00:27:14.131 "base_bdevs_list": [ 00:27:14.131 { 00:27:14.131 "name": "pt1", 00:27:14.131 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:14.131 "is_configured": true, 00:27:14.131 "data_offset": 256, 00:27:14.131 "data_size": 7936 00:27:14.131 }, 00:27:14.131 { 00:27:14.131 "name": "pt2", 00:27:14.131 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:14.131 "is_configured": true, 00:27:14.131 "data_offset": 256, 00:27:14.131 "data_size": 7936 00:27:14.131 } 00:27:14.131 ] 00:27:14.131 } 00:27:14.131 } 00:27:14.131 }' 00:27:14.131 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:14.131 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:14.131 pt2' 00:27:14.131 07:40:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:14.131 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:14.131 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:14.390 "name": "pt1", 00:27:14.390 "aliases": [ 00:27:14.390 "5e6b982a-aa7a-4854-b2ec-43e4530e561a" 00:27:14.390 ], 00:27:14.390 "product_name": "passthru", 00:27:14.390 "block_size": 4096, 00:27:14.390 "num_blocks": 8192, 00:27:14.390 "uuid": "5e6b982a-aa7a-4854-b2ec-43e4530e561a", 00:27:14.390 "assigned_rate_limits": { 00:27:14.390 "rw_ios_per_sec": 0, 00:27:14.390 "rw_mbytes_per_sec": 0, 00:27:14.390 "r_mbytes_per_sec": 0, 00:27:14.390 "w_mbytes_per_sec": 0 00:27:14.390 }, 00:27:14.390 "claimed": true, 00:27:14.390 "claim_type": "exclusive_write", 00:27:14.390 "zoned": false, 00:27:14.390 "supported_io_types": { 00:27:14.390 "read": true, 00:27:14.390 "write": true, 00:27:14.390 "unmap": true, 00:27:14.390 "write_zeroes": true, 00:27:14.390 "flush": true, 00:27:14.390 "reset": true, 00:27:14.390 "compare": false, 00:27:14.390 "compare_and_write": false, 00:27:14.390 "abort": true, 00:27:14.390 "nvme_admin": false, 00:27:14.390 "nvme_io": false 00:27:14.390 }, 00:27:14.390 "memory_domains": [ 00:27:14.390 { 00:27:14.390 "dma_device_id": "system", 00:27:14.390 "dma_device_type": 1 00:27:14.390 }, 00:27:14.390 { 00:27:14.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.390 "dma_device_type": 2 00:27:14.390 } 00:27:14.390 ], 00:27:14.390 "driver_specific": { 00:27:14.390 "passthru": { 00:27:14.390 "name": "pt1", 00:27:14.390 "base_bdev_name": "malloc1" 00:27:14.390 } 00:27:14.390 } 00:27:14.390 }' 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:14.390 07:40:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:14.649 07:40:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:14.649 "name": "pt2", 00:27:14.649 "aliases": [ 00:27:14.649 "85612c0d-c596-e45a-9074-8d2810934ec7" 00:27:14.649 ], 00:27:14.649 "product_name": "passthru", 00:27:14.649 "block_size": 4096, 00:27:14.649 "num_blocks": 8192, 00:27:14.649 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:14.649 "assigned_rate_limits": { 00:27:14.649 "rw_ios_per_sec": 0, 00:27:14.649 "rw_mbytes_per_sec": 0, 00:27:14.649 "r_mbytes_per_sec": 0, 00:27:14.649 "w_mbytes_per_sec": 0 00:27:14.649 }, 00:27:14.649 "claimed": true, 00:27:14.649 "claim_type": "exclusive_write", 00:27:14.649 "zoned": false, 00:27:14.649 "supported_io_types": { 00:27:14.649 "read": true, 00:27:14.649 "write": true, 00:27:14.649 "unmap": true, 00:27:14.649 "write_zeroes": true, 00:27:14.649 "flush": true, 00:27:14.649 "reset": true, 00:27:14.649 "compare": false, 00:27:14.649 "compare_and_write": false, 00:27:14.649 "abort": true, 00:27:14.649 "nvme_admin": false, 00:27:14.649 "nvme_io": false 00:27:14.649 }, 00:27:14.649 "memory_domains": [ 00:27:14.649 { 00:27:14.649 "dma_device_id": "system", 00:27:14.649 "dma_device_type": 1 00:27:14.649 }, 00:27:14.649 { 00:27:14.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.649 "dma_device_type": 2 00:27:14.649 } 00:27:14.649 ], 00:27:14.649 "driver_specific": { 00:27:14.649 "passthru": { 00:27:14.649 "name": "pt2", 00:27:14.649 "base_bdev_name": "malloc2" 00:27:14.649 } 00:27:14.649 } 00:27:14.649 }' 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@209 -- # [[ null == null ]] 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.649 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:14.908 [2024-05-16 07:40:08.389462] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.908 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 812e0387-1357-11ef-8e8f-9dd684e56d79 '!=' 812e0387-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:14.908 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:14.908 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:14.908 07:40:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@215 -- # return 0 00:27:14.908 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:15.167 [2024-05-16 07:40:08.609451] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.167 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.425 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:15.425 "name": "raid_bdev1", 00:27:15.425 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:15.425 "strip_size_kb": 0, 00:27:15.425 "state": "online", 00:27:15.425 "raid_level": "raid1", 00:27:15.425 "superblock": true, 00:27:15.425 "num_base_bdevs": 2, 00:27:15.425 "num_base_bdevs_discovered": 1, 00:27:15.425 "num_base_bdevs_operational": 1, 00:27:15.425 "base_bdevs_list": [ 00:27:15.425 { 00:27:15.425 "name": null, 00:27:15.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.425 "is_configured": false, 00:27:15.425 "data_offset": 256, 00:27:15.425 "data_size": 7936 00:27:15.425 }, 00:27:15.425 { 00:27:15.425 "name": "pt2", 00:27:15.425 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:15.425 "is_configured": true, 00:27:15.425 "data_offset": 256, 00:27:15.425 "data_size": 7936 00:27:15.425 } 00:27:15.425 ] 00:27:15.425 }' 00:27:15.425 07:40:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:15.425 07:40:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.684 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:15.942 [2024-05-16 07:40:09.429422] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:15.942 [2024-05-16 07:40:09.429448] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:15.942 [2024-05-16 07:40:09.429468] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:15.942 [2024-05-16 
07:40:09.429480] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:15.942 [2024-05-16 07:40:09.429484] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a074180 name raid_bdev1, state offline 00:27:15.942 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:15.942 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.507 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:16.507 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:16.507 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:16.507 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:16.507 07:40:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:27:16.507 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:16.766 [2024-05-16 07:40:10.249423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:16.766 [2024-05-16 07:40:10.249477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.766 [2024-05-16 07:40:10.249504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073f00 00:27:16.766 [2024-05-16 07:40:10.249512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.766 [2024-05-16 07:40:10.250044] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.766 [2024-05-16 07:40:10.250068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:16.766 [2024-05-16 07:40:10.250089] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:16.766 [2024-05-16 07:40:10.250099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:16.766 [2024-05-16 07:40:10.250119] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a074180 00:27:16.766 [2024-05-16 07:40:10.250139] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:16.766 [2024-05-16 07:40:10.250159] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a0d6e20 00:27:16.766 [2024-05-16 07:40:10.250196] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a074180 00:27:16.767 [2024-05-16 07:40:10.250199] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a074180 00:27:16.767 [2024-05-16 
07:40:10.250218] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.767 pt2 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.767 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.025 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:17.025 "name": "raid_bdev1", 00:27:17.025 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:17.025 "strip_size_kb": 0, 00:27:17.025 "state": "online", 00:27:17.025 "raid_level": "raid1", 00:27:17.025 "superblock": true, 00:27:17.025 "num_base_bdevs": 2, 00:27:17.025 "num_base_bdevs_discovered": 1, 00:27:17.025 "num_base_bdevs_operational": 1, 00:27:17.025 "base_bdevs_list": [ 00:27:17.025 { 00:27:17.025 "name": null, 00:27:17.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.025 "is_configured": false, 00:27:17.025 "data_offset": 256, 00:27:17.025 "data_size": 7936 00:27:17.025 }, 00:27:17.025 { 00:27:17.025 "name": "pt2", 00:27:17.025 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:17.025 "is_configured": true, 00:27:17.025 "data_offset": 256, 00:27:17.025 "data_size": 7936 00:27:17.025 } 00:27:17.025 ] 00:27:17.025 }' 00:27:17.025 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:17.025 07:40:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.283 07:40:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:17.851 [2024-05-16 07:40:11.125423] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.851 [2024-05-16 07:40:11.125459] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:17.851 [2024-05-16 07:40:11.125472] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.851 [2024-05-16 07:40:11.125481] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.851 [2024-05-16 07:40:11.125485] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a074180 name 
raid_bdev1, state offline 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:17.851 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:18.109 [2024-05-16 07:40:11.581430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:18.109 [2024-05-16 07:40:11.581479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.109 [2024-05-16 07:40:11.581504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a073c80 00:27:18.109 [2024-05-16 07:40:11.581512] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.109 [2024-05-16 07:40:11.581997] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.109 [2024-05-16 07:40:11.582027] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:18.109 [2024-05-16 07:40:11.582046] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:18.109 [2024-05-16 07:40:11.582056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:18.109 [2024-05-16 07:40:11.582078] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:18.109 [2024-05-16 07:40:11.582082] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.109 [2024-05-16 07:40:11.582086] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a073780 name raid_bdev1, state configuring 00:27:18.109 [2024-05-16 07:40:11.582109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:18.109 [2024-05-16 07:40:11.582122] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a073780 00:27:18.109 [2024-05-16 07:40:11.582125] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:18.109 [2024-05-16 07:40:11.582144] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a0d6e20 00:27:18.109 [2024-05-16 07:40:11.582180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a073780 00:27:18.109 [2024-05-16 07:40:11.582183] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a073780 00:27:18.109 [2024-05-16 07:40:11.582200] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.109 pt1 00:27:18.109 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:18.109 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:18.109 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:18.110 07:40:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.110 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.368 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:18.368 "name": "raid_bdev1", 00:27:18.368 "uuid": "812e0387-1357-11ef-8e8f-9dd684e56d79", 00:27:18.368 "strip_size_kb": 0, 00:27:18.368 "state": "online", 00:27:18.368 "raid_level": "raid1", 00:27:18.368 "superblock": true, 00:27:18.368 "num_base_bdevs": 2, 00:27:18.368 "num_base_bdevs_discovered": 1, 00:27:18.368 "num_base_bdevs_operational": 1, 00:27:18.368 "base_bdevs_list": [ 00:27:18.368 { 00:27:18.368 "name": null, 00:27:18.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.368 "is_configured": false, 00:27:18.368 "data_offset": 256, 00:27:18.368 "data_size": 7936 00:27:18.368 }, 00:27:18.368 { 00:27:18.368 "name": "pt2", 00:27:18.368 "uuid": "85612c0d-c596-e45a-9074-8d2810934ec7", 00:27:18.368 "is_configured": true, 00:27:18.368 "data_offset": 256, 00:27:18.368 "data_size": 7936 00:27:18.368 } 00:27:18.368 ] 00:27:18.368 }' 00:27:18.368 07:40:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:18.368 07:40:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.625 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:18.625 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:18.882 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:18.882 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:18.882 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:19.141 [2024-05-16 07:40:12.561472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 812e0387-1357-11ef-8e8f-9dd684e56d79 '!=' 812e0387-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 64235 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # 
'[' -z 64235 ']' 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 64235 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # tail -1 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps -c -o command 64235 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:27:19.142 killing process with pid 64235 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64235' 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 64235 00:27:19.142 [2024-05-16 07:40:12.591128] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.142 [2024-05-16 07:40:12.591145] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.142 [2024-05-16 07:40:12.591165] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.142 [2024-05-16 07:40:12.591169] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a073780 name raid_bdev1, state offline 00:27:19.142 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 64235 00:27:19.142 [2024-05-16 07:40:12.600736] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.408 07:40:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:27:19.408 00:27:19.408 real 0m12.889s 00:27:19.408 user 0m23.061s 00:27:19.408 sys 0m1.965s 00:27:19.408 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.408 07:40:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.408 ************************************ 00:27:19.408 END TEST raid_superblock_test_4k 00:27:19.408 ************************************ 00:27:19.408 07:40:12 bdev_raid -- bdev/bdev_raid.sh@834 -- # '[' '' = true ']' 00:27:19.408 07:40:12 bdev_raid -- bdev/bdev_raid.sh@838 -- # base_malloc_params='-m 32' 00:27:19.408 07:40:12 bdev_raid -- bdev/bdev_raid.sh@839 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:19.408 07:40:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:19.408 07:40:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.408 07:40:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.408 ************************************ 00:27:19.408 START TEST raid_state_function_test_sb_md_separate 00:27:19.408 ************************************ 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:27:19.408 07:40:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # raid_pid=64622 00:27:19.408 Process raid pid: 64622 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 64622' 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@247 -- # waitforlisten 64622 /var/tmp/spdk-raid.sock 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64622 ']' 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:19.408 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.408 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock... 00:27:19.409 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:19.409 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.409 07:40:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:19.409 [2024-05-16 07:40:12.825765] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:19.409 [2024-05-16 07:40:12.825998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:27:19.975 EAL: TSC is not safe to use in SMP mode 00:27:19.975 EAL: TSC is not invariant 00:27:19.975 [2024-05-16 07:40:13.305166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.975 [2024-05-16 07:40:13.403270] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:19.975 [2024-05-16 07:40:13.405707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.975 [2024-05-16 07:40:13.406660] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:19.975 [2024-05-16 07:40:13.406681] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.543 07:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.543 07:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:27:20.543 07:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:20.801 [2024-05-16 07:40:14.099457] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:20.801 [2024-05-16 07:40:14.099521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:20.801 [2024-05-16 07:40:14.099526] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:20.801 [2024-05-16 07:40:14.099533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:20.801 07:40:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.801 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:21.058 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:21.058 "name": "Existed_Raid", 00:27:21.058 "uuid": "8843afd7-1357-11ef-8e8f-9dd684e56d79", 00:27:21.058 "strip_size_kb": 0, 00:27:21.058 "state": "configuring", 00:27:21.058 "raid_level": "raid1", 00:27:21.058 "superblock": true, 00:27:21.058 "num_base_bdevs": 2, 00:27:21.058 "num_base_bdevs_discovered": 0, 00:27:21.058 "num_base_bdevs_operational": 2, 00:27:21.058 "base_bdevs_list": [ 00:27:21.058 { 00:27:21.058 "name": "BaseBdev1", 00:27:21.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.058 "is_configured": false, 00:27:21.058 "data_offset": 0, 00:27:21.058 "data_size": 0 00:27:21.058 }, 00:27:21.058 { 00:27:21.058 "name": "BaseBdev2", 00:27:21.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.058 "is_configured": false, 00:27:21.058 "data_offset": 0, 00:27:21.058 "data_size": 0 00:27:21.058 } 00:27:21.058 ] 00:27:21.058 }' 00:27:21.058 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:21.058 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:21.316 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:21.574 [2024-05-16 07:40:14.887439] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:21.574 [2024-05-16 07:40:14.887465] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa42500 name Existed_Raid, state configuring 00:27:21.574 07:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:21.832 [2024-05-16 07:40:15.167444] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:21.833 [2024-05-16 07:40:15.167486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:21.833 [2024-05-16 07:40:15.167490] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:21.833 [2024-05-16 07:40:15.167497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:21.833 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:22.090 [2024-05-16 07:40:15.464301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.090 BaseBdev1 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # waitforbdev BaseBdev1 00:27:22.090 
07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:22.090 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:22.348 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:22.607 [ 00:27:22.607 { 00:27:22.607 "name": "BaseBdev1", 00:27:22.607 "aliases": [ 00:27:22.607 "8913d107-1357-11ef-8e8f-9dd684e56d79" 00:27:22.607 ], 00:27:22.607 "product_name": "Malloc disk", 00:27:22.607 "block_size": 4096, 00:27:22.607 "num_blocks": 8192, 00:27:22.607 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:22.607 "md_size": 32, 00:27:22.607 "md_interleave": false, 00:27:22.607 "dif_type": 0, 00:27:22.607 "assigned_rate_limits": { 00:27:22.607 "rw_ios_per_sec": 0, 00:27:22.607 "rw_mbytes_per_sec": 0, 00:27:22.607 "r_mbytes_per_sec": 0, 00:27:22.607 "w_mbytes_per_sec": 0 00:27:22.607 }, 00:27:22.607 "claimed": true, 00:27:22.607 "claim_type": "exclusive_write", 00:27:22.607 "zoned": false, 00:27:22.607 "supported_io_types": { 00:27:22.607 "read": true, 00:27:22.607 "write": true, 00:27:22.607 "unmap": true, 00:27:22.607 "write_zeroes": true, 00:27:22.607 "flush": true, 00:27:22.607 "reset": true, 00:27:22.607 "compare": false, 00:27:22.607 "compare_and_write": false, 00:27:22.607 "abort": true, 00:27:22.607 "nvme_admin": false, 00:27:22.607 "nvme_io": false 00:27:22.607 }, 00:27:22.607 "memory_domains": [ 00:27:22.607 { 00:27:22.607 "dma_device_id": "system", 00:27:22.607 "dma_device_type": 1 00:27:22.607 }, 00:27:22.607 { 00:27:22.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:22.607 "dma_device_type": 2 00:27:22.607 } 00:27:22.607 ], 00:27:22.607 "driver_specific": {} 00:27:22.607 } 00:27:22.607 ] 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:22.607 07:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.866 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:22.866 "name": "Existed_Raid", 00:27:22.866 "uuid": "88e6a61f-1357-11ef-8e8f-9dd684e56d79", 00:27:22.866 "strip_size_kb": 0, 00:27:22.866 "state": "configuring", 00:27:22.867 "raid_level": "raid1", 00:27:22.867 "superblock": true, 00:27:22.867 "num_base_bdevs": 2, 00:27:22.867 "num_base_bdevs_discovered": 1, 00:27:22.867 "num_base_bdevs_operational": 2, 00:27:22.867 "base_bdevs_list": [ 00:27:22.867 { 00:27:22.867 "name": "BaseBdev1", 00:27:22.867 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:22.867 "is_configured": true, 00:27:22.867 "data_offset": 256, 00:27:22.867 "data_size": 7936 00:27:22.867 }, 00:27:22.867 { 00:27:22.867 "name": "BaseBdev2", 00:27:22.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.867 "is_configured": false, 00:27:22.867 "data_offset": 0, 00:27:22.867 "data_size": 0 00:27:22.867 } 00:27:22.867 ] 00:27:22.867 }' 00:27:22.867 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:22.867 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:23.125 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:23.384 [2024-05-16 07:40:16.743430] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:23.384 [2024-05-16 07:40:16.743460] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa42500 name Existed_Raid, state configuring 00:27:23.384 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:23.643 [2024-05-16 07:40:16.963445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:23.643 [2024-05-16 07:40:16.964174] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:23.643 [2024-05-16 07:40:16.964220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.643 07:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:23.900 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:23.900 "name": "Existed_Raid", 00:27:23.900 "uuid": "89f8b242-1357-11ef-8e8f-9dd684e56d79", 00:27:23.900 "strip_size_kb": 0, 00:27:23.900 "state": "configuring", 00:27:23.900 "raid_level": "raid1", 00:27:23.900 "superblock": true, 00:27:23.900 "num_base_bdevs": 2, 00:27:23.900 "num_base_bdevs_discovered": 1, 00:27:23.900 "num_base_bdevs_operational": 2, 00:27:23.900 "base_bdevs_list": [ 00:27:23.900 { 00:27:23.900 "name": "BaseBdev1", 00:27:23.900 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:23.900 "is_configured": true, 00:27:23.900 "data_offset": 256, 00:27:23.900 "data_size": 7936 00:27:23.900 }, 00:27:23.900 { 00:27:23.900 "name": "BaseBdev2", 00:27:23.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.900 "is_configured": false, 00:27:23.900 "data_offset": 0, 00:27:23.900 "data_size": 0 00:27:23.900 } 00:27:23.901 ] 00:27:23.901 }' 00:27:23.901 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:23.901 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:24.418 [2024-05-16 07:40:17.775521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:24.418 [2024-05-16 07:40:17.775574] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa42a00 00:27:24.418 [2024-05-16 07:40:17.775579] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:24.418 [2024-05-16 07:40:17.775598] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aaa5e20 00:27:24.418 [2024-05-16 07:40:17.775623] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa42a00 00:27:24.418 [2024-05-16 07:40:17.775627] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aa42a00 00:27:24.418 [2024-05-16 07:40:17.775639] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.418 BaseBdev2 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:24.418 07:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:24.676 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:24.934 [ 00:27:24.934 { 00:27:24.934 "name": "BaseBdev2", 00:27:24.934 "aliases": [ 00:27:24.934 "8a7499d1-1357-11ef-8e8f-9dd684e56d79" 00:27:24.934 ], 00:27:24.934 "product_name": "Malloc disk", 00:27:24.934 "block_size": 4096, 00:27:24.934 "num_blocks": 8192, 00:27:24.934 "uuid": "8a7499d1-1357-11ef-8e8f-9dd684e56d79", 00:27:24.934 "md_size": 32, 00:27:24.934 "md_interleave": false, 00:27:24.934 "dif_type": 0, 00:27:24.934 "assigned_rate_limits": { 00:27:24.934 "rw_ios_per_sec": 0, 00:27:24.934 "rw_mbytes_per_sec": 0, 00:27:24.934 "r_mbytes_per_sec": 0, 00:27:24.934 "w_mbytes_per_sec": 0 00:27:24.934 }, 00:27:24.934 "claimed": true, 00:27:24.934 "claim_type": "exclusive_write", 00:27:24.934 "zoned": false, 00:27:24.934 "supported_io_types": { 00:27:24.934 "read": true, 00:27:24.934 "write": true, 00:27:24.934 "unmap": true, 00:27:24.934 "write_zeroes": true, 00:27:24.934 "flush": true, 00:27:24.934 "reset": true, 00:27:24.934 "compare": false, 00:27:24.934 "compare_and_write": false, 00:27:24.934 "abort": true, 00:27:24.934 "nvme_admin": false, 00:27:24.934 "nvme_io": false 00:27:24.934 }, 00:27:24.934 "memory_domains": [ 00:27:24.934 { 00:27:24.934 "dma_device_id": "system", 00:27:24.934 "dma_device_type": 1 00:27:24.934 }, 00:27:24.934 { 00:27:24.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:24.934 "dma_device_type": 2 00:27:24.934 } 00:27:24.934 ], 00:27:24.934 "driver_specific": {} 00:27:24.934 } 00:27:24.934 ] 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:24.934 
07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.934 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.192 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:25.192 "name": "Existed_Raid", 00:27:25.192 "uuid": "89f8b242-1357-11ef-8e8f-9dd684e56d79", 00:27:25.192 "strip_size_kb": 0, 00:27:25.192 "state": "online", 00:27:25.192 "raid_level": "raid1", 00:27:25.192 "superblock": true, 00:27:25.192 "num_base_bdevs": 2, 00:27:25.192 "num_base_bdevs_discovered": 2, 00:27:25.192 "num_base_bdevs_operational": 2, 00:27:25.192 "base_bdevs_list": [ 00:27:25.192 { 00:27:25.192 "name": "BaseBdev1", 00:27:25.192 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:25.192 "is_configured": true, 00:27:25.192 "data_offset": 256, 00:27:25.192 "data_size": 7936 00:27:25.192 }, 00:27:25.192 { 00:27:25.192 "name": "BaseBdev2", 00:27:25.192 "uuid": "8a7499d1-1357-11ef-8e8f-9dd684e56d79", 00:27:25.192 "is_configured": true, 00:27:25.192 "data_offset": 256, 00:27:25.192 "data_size": 7936 00:27:25.192 } 00:27:25.192 ] 00:27:25.192 }' 00:27:25.192 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:25.192 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:27:25.471 07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:25.471 
07:40:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:25.744 [2024-05-16 07:40:19.091489] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:25.744 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:25.744 "name": "Existed_Raid", 00:27:25.744 "aliases": [ 00:27:25.744 "89f8b242-1357-11ef-8e8f-9dd684e56d79" 00:27:25.744 ], 00:27:25.744 "product_name": "Raid Volume", 00:27:25.744 "block_size": 4096, 00:27:25.744 "num_blocks": 7936, 00:27:25.744 "uuid": "89f8b242-1357-11ef-8e8f-9dd684e56d79", 00:27:25.744 "md_size": 32, 00:27:25.744 "md_interleave": false, 00:27:25.744 "dif_type": 0, 00:27:25.744 "assigned_rate_limits": { 00:27:25.744 "rw_ios_per_sec": 0, 00:27:25.744 "rw_mbytes_per_sec": 0, 00:27:25.744 "r_mbytes_per_sec": 0, 00:27:25.744 "w_mbytes_per_sec": 0 00:27:25.745 }, 00:27:25.745 "claimed": false, 00:27:25.745 "zoned": false, 00:27:25.745 "supported_io_types": { 00:27:25.745 "read": true, 00:27:25.745 "write": true, 00:27:25.745 "unmap": false, 00:27:25.745 "write_zeroes": true, 00:27:25.745 "flush": false, 00:27:25.745 "reset": true, 00:27:25.745 "compare": false, 00:27:25.745 "compare_and_write": false, 00:27:25.745 "abort": false, 00:27:25.745 "nvme_admin": false, 00:27:25.745 "nvme_io": false 00:27:25.745 }, 00:27:25.745 "memory_domains": [ 00:27:25.745 { 00:27:25.745 "dma_device_id": "system", 00:27:25.745 "dma_device_type": 1 00:27:25.745 }, 00:27:25.745 { 00:27:25.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.745 "dma_device_type": 2 00:27:25.745 }, 00:27:25.745 { 00:27:25.745 "dma_device_id": "system", 00:27:25.745 "dma_device_type": 1 00:27:25.745 }, 00:27:25.745 { 00:27:25.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.745 "dma_device_type": 2 00:27:25.745 } 00:27:25.745 ], 00:27:25.745 "driver_specific": { 00:27:25.745 "raid": { 00:27:25.745 "uuid": "89f8b242-1357-11ef-8e8f-9dd684e56d79", 00:27:25.745 "strip_size_kb": 0, 00:27:25.745 "state": "online", 00:27:25.745 "raid_level": "raid1", 00:27:25.745 "superblock": true, 00:27:25.745 "num_base_bdevs": 2, 00:27:25.745 "num_base_bdevs_discovered": 2, 00:27:25.745 "num_base_bdevs_operational": 2, 00:27:25.745 "base_bdevs_list": [ 00:27:25.745 { 00:27:25.745 "name": "BaseBdev1", 00:27:25.745 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:25.745 "is_configured": true, 00:27:25.745 "data_offset": 256, 00:27:25.745 "data_size": 7936 00:27:25.745 }, 00:27:25.745 { 00:27:25.745 "name": "BaseBdev2", 00:27:25.745 "uuid": "8a7499d1-1357-11ef-8e8f-9dd684e56d79", 00:27:25.745 "is_configured": true, 00:27:25.745 "data_offset": 256, 00:27:25.745 "data_size": 7936 00:27:25.745 } 00:27:25.745 ] 00:27:25.745 } 00:27:25.745 } 00:27:25.745 }' 00:27:25.745 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:25.745 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:27:25.745 BaseBdev2' 00:27:25.745 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:25.745 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:25.745 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:26.003 "name": "BaseBdev1", 00:27:26.003 "aliases": [ 00:27:26.003 "8913d107-1357-11ef-8e8f-9dd684e56d79" 00:27:26.003 ], 00:27:26.003 "product_name": "Malloc disk", 00:27:26.003 "block_size": 4096, 00:27:26.003 "num_blocks": 8192, 00:27:26.003 "uuid": "8913d107-1357-11ef-8e8f-9dd684e56d79", 00:27:26.003 "md_size": 32, 00:27:26.003 "md_interleave": false, 00:27:26.003 "dif_type": 0, 00:27:26.003 "assigned_rate_limits": { 00:27:26.003 "rw_ios_per_sec": 0, 00:27:26.003 "rw_mbytes_per_sec": 0, 00:27:26.003 "r_mbytes_per_sec": 0, 00:27:26.003 "w_mbytes_per_sec": 0 00:27:26.003 }, 00:27:26.003 "claimed": true, 00:27:26.003 "claim_type": "exclusive_write", 00:27:26.003 "zoned": false, 00:27:26.003 "supported_io_types": { 00:27:26.003 "read": true, 00:27:26.003 "write": true, 00:27:26.003 "unmap": true, 00:27:26.003 "write_zeroes": true, 00:27:26.003 "flush": true, 00:27:26.003 "reset": true, 00:27:26.003 "compare": false, 00:27:26.003 "compare_and_write": false, 00:27:26.003 "abort": true, 00:27:26.003 "nvme_admin": false, 00:27:26.003 "nvme_io": false 00:27:26.003 }, 00:27:26.003 "memory_domains": [ 00:27:26.003 { 00:27:26.003 "dma_device_id": "system", 00:27:26.003 "dma_device_type": 1 00:27:26.003 }, 00:27:26.003 { 00:27:26.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.003 "dma_device_type": 2 00:27:26.003 } 00:27:26.003 ], 00:27:26.003 "driver_specific": {} 00:27:26.003 }' 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:26.003 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # 
base_bdev_info='{ 00:27:26.262 "name": "BaseBdev2", 00:27:26.262 "aliases": [ 00:27:26.262 "8a7499d1-1357-11ef-8e8f-9dd684e56d79" 00:27:26.262 ], 00:27:26.262 "product_name": "Malloc disk", 00:27:26.262 "block_size": 4096, 00:27:26.262 "num_blocks": 8192, 00:27:26.262 "uuid": "8a7499d1-1357-11ef-8e8f-9dd684e56d79", 00:27:26.262 "md_size": 32, 00:27:26.262 "md_interleave": false, 00:27:26.262 "dif_type": 0, 00:27:26.262 "assigned_rate_limits": { 00:27:26.262 "rw_ios_per_sec": 0, 00:27:26.262 "rw_mbytes_per_sec": 0, 00:27:26.262 "r_mbytes_per_sec": 0, 00:27:26.262 "w_mbytes_per_sec": 0 00:27:26.262 }, 00:27:26.262 "claimed": true, 00:27:26.262 "claim_type": "exclusive_write", 00:27:26.262 "zoned": false, 00:27:26.262 "supported_io_types": { 00:27:26.262 "read": true, 00:27:26.262 "write": true, 00:27:26.262 "unmap": true, 00:27:26.262 "write_zeroes": true, 00:27:26.262 "flush": true, 00:27:26.262 "reset": true, 00:27:26.262 "compare": false, 00:27:26.262 "compare_and_write": false, 00:27:26.262 "abort": true, 00:27:26.262 "nvme_admin": false, 00:27:26.262 "nvme_io": false 00:27:26.262 }, 00:27:26.262 "memory_domains": [ 00:27:26.262 { 00:27:26.262 "dma_device_id": "system", 00:27:26.262 "dma_device_type": 1 00:27:26.262 }, 00:27:26.262 { 00:27:26.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.262 "dma_device_type": 2 00:27:26.262 } 00:27:26.262 ], 00:27:26.262 "driver_specific": {} 00:27:26.262 }' 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:26.262 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:26.520 [2024-05-16 07:40:19.919476] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # local expected_state 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:26.520 07:40:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.520 07:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.778 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:26.778 "name": "Existed_Raid", 00:27:26.778 "uuid": "89f8b242-1357-11ef-8e8f-9dd684e56d79", 00:27:26.778 "strip_size_kb": 0, 00:27:26.778 "state": "online", 00:27:26.778 "raid_level": "raid1", 00:27:26.778 "superblock": true, 00:27:26.778 "num_base_bdevs": 2, 00:27:26.778 "num_base_bdevs_discovered": 1, 00:27:26.778 "num_base_bdevs_operational": 1, 00:27:26.778 "base_bdevs_list": [ 00:27:26.778 { 00:27:26.778 "name": null, 00:27:26.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.778 "is_configured": false, 00:27:26.778 "data_offset": 256, 00:27:26.778 "data_size": 7936 00:27:26.778 }, 00:27:26.778 { 00:27:26.778 "name": "BaseBdev2", 00:27:26.778 "uuid": "8a7499d1-1357-11ef-8e8f-9dd684e56d79", 00:27:26.778 "is_configured": true, 00:27:26.778 "data_offset": 256, 00:27:26.778 "data_size": 7936 00:27:26.778 } 00:27:26.778 ] 00:27:26.778 }' 00:27:26.778 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:26.778 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.036 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:27.036 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:27.036 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.036 07:40:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:27.295 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:27.295 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:27.295 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:27.553 [2024-05-16 07:40:20.888346] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:27.553 [2024-05-16 07:40:20.888384] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:27.553 [2024-05-16 07:40:20.893317] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:27.553 [2024-05-16 07:40:20.893335] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:27.553 [2024-05-16 07:40:20.893339] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa42a00 name Existed_Raid, state offline 00:27:27.553 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:27.553 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:27.553 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:27:27.553 07:40:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@342 -- # killprocess 64622 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64622 ']' 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 64622 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64622 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # tail -1 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:27:27.810 killing process with pid 64622 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64622' 00:27:27.810 07:40:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 64622 00:27:27.810 [2024-05-16 07:40:21.132682] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:27.810 [2024-05-16 07:40:21.132730] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 64622 00:27:27.810 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@344 -- # return 0 00:27:27.810 00:27:27.811 real 0m8.493s 00:27:27.811 user 0m14.717s 00:27:27.811 sys 0m1.523s 00:27:27.811 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:27.811 07:40:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.811 ************************************ 00:27:27.811 END TEST raid_state_function_test_sb_md_separate 00:27:27.811 ************************************ 00:27:27.811 07:40:21 bdev_raid -- bdev/bdev_raid.sh@840 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:27.811 07:40:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:27:27.811 07:40:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:27.811 07:40:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:27.811 ************************************ 00:27:27.811 START TEST raid_superblock_test_md_separate 00:27:27.811 ************************************ 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 
-- # raid_pid=64892 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 64892 /var/tmp/spdk-raid.sock 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 64892 ']' 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:27.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.811 07:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:28.068 [2024-05-16 07:40:21.362984] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:28.068 [2024-05-16 07:40:21.363229] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:27:28.326 EAL: TSC is not safe to use in SMP mode 00:27:28.326 EAL: TSC is not invariant 00:27:28.326 [2024-05-16 07:40:21.839502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.584 [2024-05-16 07:40:21.923023] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:27:28.584 [2024-05-16 07:40:21.925476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.584 [2024-05-16 07:40:21.926317] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:28.584 [2024-05-16 07:40:21.926334] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:29.151 malloc1 00:27:29.151 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:29.409 [2024-05-16 07:40:22.873227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:29.409 [2024-05-16 07:40:22.873286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.409 [2024-05-16 07:40:22.874034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462780 00:27:29.409 [2024-05-16 07:40:22.874067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.409 [2024-05-16 07:40:22.874869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.409 [2024-05-16 07:40:22.874906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:29.409 pt1 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:29.409 07:40:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:29.409 07:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:29.667 malloc2 00:27:29.667 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:29.926 [2024-05-16 07:40:23.297231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:29.926 [2024-05-16 07:40:23.297288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.926 [2024-05-16 07:40:23.297319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462c80 00:27:29.926 [2024-05-16 07:40:23.297330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.926 [2024-05-16 07:40:23.297873] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.926 [2024-05-16 07:40:23.297908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:29.926 pt2 00:27:29.926 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:29.926 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:29.926 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:30.185 [2024-05-16 07:40:23.585269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:30.185 [2024-05-16 07:40:23.585741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:30.185 [2024-05-16 07:40:23.585795] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a462f00 00:27:30.185 [2024-05-16 07:40:23.585800] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:30.185 [2024-05-16 07:40:23.585835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a4c5e20 00:27:30.185 [2024-05-16 07:40:23.585862] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a462f00 00:27:30.185 [2024-05-16 07:40:23.585866] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a462f00 00:27:30.185 [2024-05-16 07:40:23.585881] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:30.185 
07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.185 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.444 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:30.444 "name": "raid_bdev1", 00:27:30.444 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:30.444 "strip_size_kb": 0, 00:27:30.444 "state": "online", 00:27:30.444 "raid_level": "raid1", 00:27:30.444 "superblock": true, 00:27:30.444 "num_base_bdevs": 2, 00:27:30.444 "num_base_bdevs_discovered": 2, 00:27:30.444 "num_base_bdevs_operational": 2, 00:27:30.444 "base_bdevs_list": [ 00:27:30.444 { 00:27:30.444 "name": "pt1", 00:27:30.444 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:30.444 "is_configured": true, 00:27:30.444 "data_offset": 256, 00:27:30.444 "data_size": 7936 00:27:30.444 }, 00:27:30.444 { 00:27:30.444 "name": "pt2", 00:27:30.444 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:30.444 "is_configured": true, 00:27:30.444 "data_offset": 256, 00:27:30.444 "data_size": 7936 00:27:30.444 } 00:27:30.444 ] 00:27:30.444 }' 00:27:30.444 07:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:30.444 07:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:30.701 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:30.959 [2024-05-16 07:40:24.389263] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:30.959 "name": "raid_bdev1", 00:27:30.959 "aliases": [ 00:27:30.959 "8deb1b19-1357-11ef-8e8f-9dd684e56d79" 00:27:30.959 ], 00:27:30.959 "product_name": "Raid Volume", 00:27:30.959 
"block_size": 4096, 00:27:30.959 "num_blocks": 7936, 00:27:30.959 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:30.959 "md_size": 32, 00:27:30.959 "md_interleave": false, 00:27:30.959 "dif_type": 0, 00:27:30.959 "assigned_rate_limits": { 00:27:30.959 "rw_ios_per_sec": 0, 00:27:30.959 "rw_mbytes_per_sec": 0, 00:27:30.959 "r_mbytes_per_sec": 0, 00:27:30.959 "w_mbytes_per_sec": 0 00:27:30.959 }, 00:27:30.959 "claimed": false, 00:27:30.959 "zoned": false, 00:27:30.959 "supported_io_types": { 00:27:30.959 "read": true, 00:27:30.959 "write": true, 00:27:30.959 "unmap": false, 00:27:30.959 "write_zeroes": true, 00:27:30.959 "flush": false, 00:27:30.959 "reset": true, 00:27:30.959 "compare": false, 00:27:30.959 "compare_and_write": false, 00:27:30.959 "abort": false, 00:27:30.959 "nvme_admin": false, 00:27:30.959 "nvme_io": false 00:27:30.959 }, 00:27:30.959 "memory_domains": [ 00:27:30.959 { 00:27:30.959 "dma_device_id": "system", 00:27:30.959 "dma_device_type": 1 00:27:30.959 }, 00:27:30.959 { 00:27:30.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.959 "dma_device_type": 2 00:27:30.959 }, 00:27:30.959 { 00:27:30.959 "dma_device_id": "system", 00:27:30.959 "dma_device_type": 1 00:27:30.959 }, 00:27:30.959 { 00:27:30.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:30.959 "dma_device_type": 2 00:27:30.959 } 00:27:30.959 ], 00:27:30.959 "driver_specific": { 00:27:30.959 "raid": { 00:27:30.959 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:30.959 "strip_size_kb": 0, 00:27:30.959 "state": "online", 00:27:30.959 "raid_level": "raid1", 00:27:30.959 "superblock": true, 00:27:30.959 "num_base_bdevs": 2, 00:27:30.959 "num_base_bdevs_discovered": 2, 00:27:30.959 "num_base_bdevs_operational": 2, 00:27:30.959 "base_bdevs_list": [ 00:27:30.959 { 00:27:30.959 "name": "pt1", 00:27:30.959 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:30.959 "is_configured": true, 00:27:30.959 "data_offset": 256, 00:27:30.959 "data_size": 7936 00:27:30.959 }, 00:27:30.959 { 00:27:30.959 "name": "pt2", 00:27:30.959 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:30.959 "is_configured": true, 00:27:30.959 "data_offset": 256, 00:27:30.959 "data_size": 7936 00:27:30.959 } 00:27:30.959 ] 00:27:30.959 } 00:27:30.959 } 00:27:30.959 }' 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:30.959 pt2' 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:30.959 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:31.217 "name": "pt1", 00:27:31.217 "aliases": [ 00:27:31.217 "95daa245-7fdf-0950-86e7-225a3818d77b" 00:27:31.217 ], 00:27:31.217 "product_name": "passthru", 00:27:31.217 "block_size": 4096, 00:27:31.217 "num_blocks": 8192, 00:27:31.217 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:31.217 "md_size": 32, 00:27:31.217 "md_interleave": false, 00:27:31.217 "dif_type": 0, 00:27:31.217 "assigned_rate_limits": { 00:27:31.217 
"rw_ios_per_sec": 0, 00:27:31.217 "rw_mbytes_per_sec": 0, 00:27:31.217 "r_mbytes_per_sec": 0, 00:27:31.217 "w_mbytes_per_sec": 0 00:27:31.217 }, 00:27:31.217 "claimed": true, 00:27:31.217 "claim_type": "exclusive_write", 00:27:31.217 "zoned": false, 00:27:31.217 "supported_io_types": { 00:27:31.217 "read": true, 00:27:31.217 "write": true, 00:27:31.217 "unmap": true, 00:27:31.217 "write_zeroes": true, 00:27:31.217 "flush": true, 00:27:31.217 "reset": true, 00:27:31.217 "compare": false, 00:27:31.217 "compare_and_write": false, 00:27:31.217 "abort": true, 00:27:31.217 "nvme_admin": false, 00:27:31.217 "nvme_io": false 00:27:31.217 }, 00:27:31.217 "memory_domains": [ 00:27:31.217 { 00:27:31.217 "dma_device_id": "system", 00:27:31.217 "dma_device_type": 1 00:27:31.217 }, 00:27:31.217 { 00:27:31.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.217 "dma_device_type": 2 00:27:31.217 } 00:27:31.217 ], 00:27:31.217 "driver_specific": { 00:27:31.217 "passthru": { 00:27:31.217 "name": "pt1", 00:27:31.217 "base_bdev_name": "malloc1" 00:27:31.217 } 00:27:31.217 } 00:27:31.217 }' 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:31.217 07:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:31.476 "name": "pt2", 00:27:31.476 "aliases": [ 00:27:31.476 "bb076605-5a10-785b-8693-f436e27dfb4c" 00:27:31.476 ], 00:27:31.476 "product_name": "passthru", 00:27:31.476 "block_size": 4096, 00:27:31.476 "num_blocks": 8192, 00:27:31.476 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:31.476 "md_size": 32, 00:27:31.476 "md_interleave": false, 00:27:31.476 "dif_type": 0, 00:27:31.476 "assigned_rate_limits": { 00:27:31.476 "rw_ios_per_sec": 0, 00:27:31.476 "rw_mbytes_per_sec": 0, 00:27:31.476 "r_mbytes_per_sec": 0, 00:27:31.476 "w_mbytes_per_sec": 0 00:27:31.476 }, 00:27:31.476 "claimed": 
true, 00:27:31.476 "claim_type": "exclusive_write", 00:27:31.476 "zoned": false, 00:27:31.476 "supported_io_types": { 00:27:31.476 "read": true, 00:27:31.476 "write": true, 00:27:31.476 "unmap": true, 00:27:31.476 "write_zeroes": true, 00:27:31.476 "flush": true, 00:27:31.476 "reset": true, 00:27:31.476 "compare": false, 00:27:31.476 "compare_and_write": false, 00:27:31.476 "abort": true, 00:27:31.476 "nvme_admin": false, 00:27:31.476 "nvme_io": false 00:27:31.476 }, 00:27:31.476 "memory_domains": [ 00:27:31.476 { 00:27:31.476 "dma_device_id": "system", 00:27:31.476 "dma_device_type": 1 00:27:31.476 }, 00:27:31.476 { 00:27:31.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.476 "dma_device_type": 2 00:27:31.476 } 00:27:31.476 ], 00:27:31.476 "driver_specific": { 00:27:31.476 "passthru": { 00:27:31.476 "name": "pt2", 00:27:31.476 "base_bdev_name": "malloc2" 00:27:31.476 } 00:27:31.476 } 00:27:31.476 }' 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:31.476 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:31.734 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:31.991 [2024-05-16 07:40:25.361334] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:31.992 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8deb1b19-1357-11ef-8e8f-9dd684e56d79 00:27:31.992 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 8deb1b19-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:31.992 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:32.249 [2024-05-16 07:40:25.593249] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:32.249 [2024-05-16 07:40:25.593280] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:32.249 [2024-05-16 07:40:25.593304] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.249 [2024-05-16 
07:40:25.593319] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.249 [2024-05-16 07:40:25.593324] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a462f00 name raid_bdev1, state offline 00:27:32.249 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.249 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:32.507 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:32.507 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:32.507 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:32.507 07:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:32.765 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:32.765 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:33.024 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:33.024 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.282 07:40:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:33.282 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:33.541 [2024-05-16 07:40:26.905240] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:33.541 [2024-05-16 07:40:26.905738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:33.541 [2024-05-16 07:40:26.905755] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:33.541 [2024-05-16 07:40:26.905792] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:33.541 [2024-05-16 07:40:26.905802] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.541 [2024-05-16 07:40:26.905806] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a462c80 name raid_bdev1, state configuring 00:27:33.541 request: 00:27:33.541 { 00:27:33.541 "name": "raid_bdev1", 00:27:33.541 "raid_level": "raid1", 00:27:33.541 "base_bdevs": [ 00:27:33.541 "malloc1", 00:27:33.541 "malloc2" 00:27:33.541 ], 00:27:33.541 "superblock": false, 00:27:33.541 "method": "bdev_raid_create", 00:27:33.541 "req_id": 1 00:27:33.541 } 00:27:33.541 Got JSON-RPC error response 00:27:33.541 response: 00:27:33.541 { 00:27:33.541 "code": -17, 00:27:33.541 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:33.541 } 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.541 07:40:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:33.800 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:33.800 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:33.800 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:34.060 [2024-05-16 07:40:27.393233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:34.060 [2024-05-16 07:40:27.393286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.060 [2024-05-16 07:40:27.393313] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462780 00:27:34.060 [2024-05-16 07:40:27.393321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.060 [2024-05-16 07:40:27.393785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:27:34.060 [2024-05-16 07:40:27.393808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:34.060 [2024-05-16 07:40:27.393828] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:34.060 [2024-05-16 07:40:27.393839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:34.060 pt1 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.060 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.319 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:34.319 "name": "raid_bdev1", 00:27:34.319 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:34.319 "strip_size_kb": 0, 00:27:34.319 "state": "configuring", 00:27:34.319 "raid_level": "raid1", 00:27:34.319 "superblock": true, 00:27:34.319 "num_base_bdevs": 2, 00:27:34.319 "num_base_bdevs_discovered": 1, 00:27:34.319 "num_base_bdevs_operational": 2, 00:27:34.319 "base_bdevs_list": [ 00:27:34.319 { 00:27:34.319 "name": "pt1", 00:27:34.319 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:34.319 "is_configured": true, 00:27:34.319 "data_offset": 256, 00:27:34.319 "data_size": 7936 00:27:34.319 }, 00:27:34.319 { 00:27:34.319 "name": null, 00:27:34.319 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:34.319 "is_configured": false, 00:27:34.319 "data_offset": 256, 00:27:34.319 "data_size": 7936 00:27:34.319 } 00:27:34.319 ] 00:27:34.319 }' 00:27:34.319 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:34.319 07:40:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:34.578 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:34.578 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:34.578 07:40:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:34.578 07:40:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:34.836 [2024-05-16 07:40:28.193249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:34.836 [2024-05-16 07:40:28.193313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.836 [2024-05-16 07:40:28.193341] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462f00 00:27:34.836 [2024-05-16 07:40:28.193349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.836 [2024-05-16 07:40:28.193416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.836 [2024-05-16 07:40:28.193425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:34.836 [2024-05-16 07:40:28.193446] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:34.836 [2024-05-16 07:40:28.193454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:34.836 [2024-05-16 07:40:28.193469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a463180 00:27:34.836 [2024-05-16 07:40:28.193473] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:34.836 [2024-05-16 07:40:28.193490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a4c5e20 00:27:34.836 [2024-05-16 07:40:28.193509] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a463180 00:27:34.836 [2024-05-16 07:40:28.193512] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a463180 00:27:34.836 [2024-05-16 07:40:28.193525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.836 pt2 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.836 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.097 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:35.097 "name": "raid_bdev1", 00:27:35.097 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:35.097 "strip_size_kb": 0, 00:27:35.097 "state": "online", 00:27:35.097 "raid_level": "raid1", 00:27:35.097 "superblock": true, 00:27:35.097 "num_base_bdevs": 2, 00:27:35.097 "num_base_bdevs_discovered": 2, 00:27:35.097 "num_base_bdevs_operational": 2, 00:27:35.097 "base_bdevs_list": [ 00:27:35.097 { 00:27:35.097 "name": "pt1", 00:27:35.097 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:35.097 "is_configured": true, 00:27:35.097 "data_offset": 256, 00:27:35.097 "data_size": 7936 00:27:35.097 }, 00:27:35.097 { 00:27:35.097 "name": "pt2", 00:27:35.097 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:35.097 "is_configured": true, 00:27:35.097 "data_offset": 256, 00:27:35.097 "data_size": 7936 00:27:35.097 } 00:27:35.097 ] 00:27:35.097 }' 00:27:35.097 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:35.097 07:40:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # local name 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:35.359 07:40:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:35.617 [2024-05-16 07:40:29.013261] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:35.617 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:35.617 "name": "raid_bdev1", 00:27:35.617 "aliases": [ 00:27:35.617 "8deb1b19-1357-11ef-8e8f-9dd684e56d79" 00:27:35.617 ], 00:27:35.617 "product_name": "Raid Volume", 00:27:35.617 "block_size": 4096, 00:27:35.617 "num_blocks": 7936, 00:27:35.617 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:35.617 "md_size": 32, 00:27:35.617 "md_interleave": false, 00:27:35.617 "dif_type": 0, 00:27:35.617 "assigned_rate_limits": { 00:27:35.617 "rw_ios_per_sec": 0, 00:27:35.617 "rw_mbytes_per_sec": 0, 00:27:35.617 "r_mbytes_per_sec": 0, 00:27:35.617 "w_mbytes_per_sec": 0 00:27:35.617 }, 00:27:35.617 "claimed": false, 00:27:35.617 "zoned": false, 00:27:35.617 "supported_io_types": { 00:27:35.617 "read": true, 00:27:35.617 "write": true, 00:27:35.617 "unmap": false, 00:27:35.617 "write_zeroes": true, 00:27:35.617 "flush": false, 00:27:35.617 "reset": true, 
00:27:35.617 "compare": false, 00:27:35.617 "compare_and_write": false, 00:27:35.617 "abort": false, 00:27:35.617 "nvme_admin": false, 00:27:35.617 "nvme_io": false 00:27:35.617 }, 00:27:35.617 "memory_domains": [ 00:27:35.617 { 00:27:35.617 "dma_device_id": "system", 00:27:35.617 "dma_device_type": 1 00:27:35.617 }, 00:27:35.617 { 00:27:35.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.618 "dma_device_type": 2 00:27:35.618 }, 00:27:35.618 { 00:27:35.618 "dma_device_id": "system", 00:27:35.618 "dma_device_type": 1 00:27:35.618 }, 00:27:35.618 { 00:27:35.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.618 "dma_device_type": 2 00:27:35.618 } 00:27:35.618 ], 00:27:35.618 "driver_specific": { 00:27:35.618 "raid": { 00:27:35.618 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:35.618 "strip_size_kb": 0, 00:27:35.618 "state": "online", 00:27:35.618 "raid_level": "raid1", 00:27:35.618 "superblock": true, 00:27:35.618 "num_base_bdevs": 2, 00:27:35.618 "num_base_bdevs_discovered": 2, 00:27:35.618 "num_base_bdevs_operational": 2, 00:27:35.618 "base_bdevs_list": [ 00:27:35.618 { 00:27:35.618 "name": "pt1", 00:27:35.618 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:35.618 "is_configured": true, 00:27:35.618 "data_offset": 256, 00:27:35.618 "data_size": 7936 00:27:35.618 }, 00:27:35.618 { 00:27:35.618 "name": "pt2", 00:27:35.618 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:35.618 "is_configured": true, 00:27:35.618 "data_offset": 256, 00:27:35.618 "data_size": 7936 00:27:35.618 } 00:27:35.618 ] 00:27:35.618 } 00:27:35.618 } 00:27:35.618 }' 00:27:35.618 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:35.618 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:35.618 pt2' 00:27:35.618 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:35.618 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:35.618 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:35.876 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:35.876 "name": "pt1", 00:27:35.876 "aliases": [ 00:27:35.876 "95daa245-7fdf-0950-86e7-225a3818d77b" 00:27:35.876 ], 00:27:35.876 "product_name": "passthru", 00:27:35.876 "block_size": 4096, 00:27:35.876 "num_blocks": 8192, 00:27:35.876 "uuid": "95daa245-7fdf-0950-86e7-225a3818d77b", 00:27:35.876 "md_size": 32, 00:27:35.876 "md_interleave": false, 00:27:35.876 "dif_type": 0, 00:27:35.876 "assigned_rate_limits": { 00:27:35.876 "rw_ios_per_sec": 0, 00:27:35.876 "rw_mbytes_per_sec": 0, 00:27:35.876 "r_mbytes_per_sec": 0, 00:27:35.876 "w_mbytes_per_sec": 0 00:27:35.876 }, 00:27:35.876 "claimed": true, 00:27:35.876 "claim_type": "exclusive_write", 00:27:35.876 "zoned": false, 00:27:35.876 "supported_io_types": { 00:27:35.876 "read": true, 00:27:35.876 "write": true, 00:27:35.876 "unmap": true, 00:27:35.876 "write_zeroes": true, 00:27:35.876 "flush": true, 00:27:35.876 "reset": true, 00:27:35.876 "compare": false, 00:27:35.876 "compare_and_write": false, 00:27:35.876 "abort": true, 00:27:35.876 "nvme_admin": false, 00:27:35.876 "nvme_io": false 00:27:35.876 }, 00:27:35.876 "memory_domains": [ 
00:27:35.876 { 00:27:35.876 "dma_device_id": "system", 00:27:35.876 "dma_device_type": 1 00:27:35.877 }, 00:27:35.877 { 00:27:35.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.877 "dma_device_type": 2 00:27:35.877 } 00:27:35.877 ], 00:27:35.877 "driver_specific": { 00:27:35.877 "passthru": { 00:27:35.877 "name": "pt1", 00:27:35.877 "base_bdev_name": "malloc1" 00:27:35.877 } 00:27:35.877 } 00:27:35.877 }' 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:35.877 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:36.137 "name": "pt2", 00:27:36.137 "aliases": [ 00:27:36.137 "bb076605-5a10-785b-8693-f436e27dfb4c" 00:27:36.137 ], 00:27:36.137 "product_name": "passthru", 00:27:36.137 "block_size": 4096, 00:27:36.137 "num_blocks": 8192, 00:27:36.137 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:36.137 "md_size": 32, 00:27:36.137 "md_interleave": false, 00:27:36.137 "dif_type": 0, 00:27:36.137 "assigned_rate_limits": { 00:27:36.137 "rw_ios_per_sec": 0, 00:27:36.137 "rw_mbytes_per_sec": 0, 00:27:36.137 "r_mbytes_per_sec": 0, 00:27:36.137 "w_mbytes_per_sec": 0 00:27:36.137 }, 00:27:36.137 "claimed": true, 00:27:36.137 "claim_type": "exclusive_write", 00:27:36.137 "zoned": false, 00:27:36.137 "supported_io_types": { 00:27:36.137 "read": true, 00:27:36.137 "write": true, 00:27:36.137 "unmap": true, 00:27:36.137 "write_zeroes": true, 00:27:36.137 "flush": true, 00:27:36.137 "reset": true, 00:27:36.137 "compare": false, 00:27:36.137 "compare_and_write": false, 00:27:36.137 "abort": true, 00:27:36.137 "nvme_admin": false, 00:27:36.137 "nvme_io": false 00:27:36.137 }, 00:27:36.137 "memory_domains": [ 00:27:36.137 { 00:27:36.137 "dma_device_id": "system", 00:27:36.137 "dma_device_type": 1 00:27:36.137 }, 00:27:36.137 { 00:27:36.137 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:36.137 "dma_device_type": 2 00:27:36.137 } 00:27:36.137 ], 00:27:36.137 "driver_specific": { 00:27:36.137 "passthru": { 00:27:36.137 "name": "pt2", 00:27:36.137 "base_bdev_name": "malloc2" 00:27:36.137 } 00:27:36.137 } 00:27:36.137 }' 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 4096 == 4096 ]] 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ false == false ]] 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:36.137 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:36.396 [2024-05-16 07:40:29.913298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.396 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 8deb1b19-1357-11ef-8e8f-9dd684e56d79 '!=' 8deb1b19-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:36.396 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:36.396 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:36.396 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@215 -- # return 0 00:27:36.396 07:40:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:36.654 [2024-05-16 07:40:30.197254] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=1 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.914 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.173 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:37.173 "name": "raid_bdev1", 00:27:37.173 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:37.173 "strip_size_kb": 0, 00:27:37.173 "state": "online", 00:27:37.173 "raid_level": "raid1", 00:27:37.173 "superblock": true, 00:27:37.173 "num_base_bdevs": 2, 00:27:37.173 "num_base_bdevs_discovered": 1, 00:27:37.173 "num_base_bdevs_operational": 1, 00:27:37.173 "base_bdevs_list": [ 00:27:37.173 { 00:27:37.173 "name": null, 00:27:37.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.173 "is_configured": false, 00:27:37.173 "data_offset": 256, 00:27:37.173 "data_size": 7936 00:27:37.173 }, 00:27:37.173 { 00:27:37.173 "name": "pt2", 00:27:37.173 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:37.173 "is_configured": true, 00:27:37.173 "data_offset": 256, 00:27:37.173 "data_size": 7936 00:27:37.173 } 00:27:37.173 ] 00:27:37.173 }' 00:27:37.173 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:37.173 07:40:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:37.432 07:40:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:37.692 [2024-05-16 07:40:31.001251] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:37.692 [2024-05-16 07:40:31.001274] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:37.692 [2024-05-16 07:40:31.001290] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:37.692 [2024-05-16 07:40:31.001299] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:37.692 [2024-05-16 07:40:31.001304] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a463180 name raid_bdev1, state offline 00:27:37.692 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.692 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:37.950 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:37.950 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:37.950 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:37.950 07:40:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:37.950 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:27:38.210 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:38.210 [2024-05-16 07:40:31.725259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:38.210 [2024-05-16 07:40:31.725313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.210 [2024-05-16 07:40:31.725355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462f00 00:27:38.210 [2024-05-16 07:40:31.725363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.210 [2024-05-16 07:40:31.725842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.210 [2024-05-16 07:40:31.725871] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:38.210 [2024-05-16 07:40:31.725892] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:38.210 [2024-05-16 07:40:31.725902] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:38.210 [2024-05-16 07:40:31.725914] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a463180 00:27:38.210 [2024-05-16 07:40:31.725918] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:38.210 [2024-05-16 07:40:31.725936] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a4c5e20 00:27:38.211 [2024-05-16 07:40:31.725957] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a463180 00:27:38.211 [2024-05-16 07:40:31.725961] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a463180 00:27:38.211 [2024-05-16 07:40:31.725973] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.211 pt2 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.211 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.470 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:38.470 "name": "raid_bdev1", 00:27:38.470 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:38.470 "strip_size_kb": 0, 00:27:38.470 "state": "online", 00:27:38.470 "raid_level": "raid1", 00:27:38.470 "superblock": true, 00:27:38.470 "num_base_bdevs": 2, 00:27:38.470 "num_base_bdevs_discovered": 1, 00:27:38.470 "num_base_bdevs_operational": 1, 00:27:38.470 "base_bdevs_list": [ 00:27:38.470 { 00:27:38.470 "name": null, 00:27:38.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.470 "is_configured": false, 00:27:38.470 "data_offset": 256, 00:27:38.470 "data_size": 7936 00:27:38.470 }, 00:27:38.470 { 00:27:38.470 "name": "pt2", 00:27:38.470 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:38.470 "is_configured": true, 00:27:38.470 "data_offset": 256, 00:27:38.470 "data_size": 7936 00:27:38.470 } 00:27:38.470 ] 00:27:38.470 }' 00:27:38.470 07:40:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:38.470 07:40:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.745 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:39.004 [2024-05-16 07:40:32.453254] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:39.004 [2024-05-16 07:40:32.453280] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:39.004 [2024-05-16 07:40:32.453296] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:39.004 [2024-05-16 07:40:32.453305] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:39.004 [2024-05-16 07:40:32.453310] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a463180 name raid_bdev1, state offline 00:27:39.004 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:39.004 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.262 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:39.262 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:39.262 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 
00:27:39.263 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:39.522 [2024-05-16 07:40:32.973287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:39.522 [2024-05-16 07:40:32.973342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.522 [2024-05-16 07:40:32.973368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a462c80 00:27:39.522 [2024-05-16 07:40:32.973376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.522 [2024-05-16 07:40:32.973849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.522 [2024-05-16 07:40:32.973875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:39.522 [2024-05-16 07:40:32.973895] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:39.522 [2024-05-16 07:40:32.973905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:39.522 [2024-05-16 07:40:32.973921] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:39.522 [2024-05-16 07:40:32.973925] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:39.522 [2024-05-16 07:40:32.973931] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a462780 name raid_bdev1, state configuring 00:27:39.522 [2024-05-16 07:40:32.973938] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:39.522 [2024-05-16 07:40:32.973949] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a462780 00:27:39.522 [2024-05-16 07:40:32.973953] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:39.522 [2024-05-16 07:40:32.973972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a4c5e20 00:27:39.522 [2024-05-16 07:40:32.973992] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a462780 00:27:39.522 [2024-05-16 07:40:32.973995] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a462780 00:27:39.522 [2024-05-16 07:40:32.974007] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.522 pt1 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.522 07:40:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.782 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:39.782 "name": "raid_bdev1", 00:27:39.782 "uuid": "8deb1b19-1357-11ef-8e8f-9dd684e56d79", 00:27:39.782 "strip_size_kb": 0, 00:27:39.782 "state": "online", 00:27:39.782 "raid_level": "raid1", 00:27:39.782 "superblock": true, 00:27:39.782 "num_base_bdevs": 2, 00:27:39.782 "num_base_bdevs_discovered": 1, 00:27:39.782 "num_base_bdevs_operational": 1, 00:27:39.782 "base_bdevs_list": [ 00:27:39.783 { 00:27:39.783 "name": null, 00:27:39.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.783 "is_configured": false, 00:27:39.783 "data_offset": 256, 00:27:39.783 "data_size": 7936 00:27:39.783 }, 00:27:39.783 { 00:27:39.783 "name": "pt2", 00:27:39.783 "uuid": "bb076605-5a10-785b-8693-f436e27dfb4c", 00:27:39.783 "is_configured": true, 00:27:39.783 "data_offset": 256, 00:27:39.783 "data_size": 7936 00:27:39.783 } 00:27:39.783 ] 00:27:39.783 }' 00:27:39.783 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:39.783 07:40:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.041 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:40.041 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:40.300 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:40.300 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:40.300 07:40:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:40.557 [2024-05-16 07:40:34.089352] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 8deb1b19-1357-11ef-8e8f-9dd684e56d79 '!=' 8deb1b19-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 64892 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 64892 ']' 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 64892 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:27:40.816 07:40:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps -c -o command 64892 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # tail -1 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:27:40.816 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64892' 00:27:40.817 killing process with pid 64892 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 64892 00:27:40.817 [2024-05-16 07:40:34.132036] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:40.817 [2024-05-16 07:40:34.132068] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:40.817 [2024-05-16 07:40:34.132082] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:40.817 [2024-05-16 07:40:34.132087] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a462780 name raid_bdev1, state offline 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 64892 00:27:40.817 [2024-05-16 07:40:34.141900] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:27:40.817 00:27:40.817 real 0m12.963s 00:27:40.817 user 0m22.921s 00:27:40.817 sys 0m2.243s 00:27:40.817 ************************************ 00:27:40.817 END TEST raid_superblock_test_md_separate 00:27:40.817 ************************************ 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:40.817 07:40:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.817 07:40:34 bdev_raid -- bdev/bdev_raid.sh@841 -- # '[' '' = true ']' 00:27:40.817 07:40:34 bdev_raid -- bdev/bdev_raid.sh@845 -- # base_malloc_params='-m 32 -i' 00:27:40.817 07:40:34 bdev_raid -- bdev/bdev_raid.sh@846 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:27:40.817 07:40:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:27:40.817 07:40:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:40.817 07:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:40.817 ************************************ 00:27:40.817 START TEST raid_state_function_test_sb_md_interleaved 00:27:40.817 ************************************ 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local raid_level=raid1 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local num_base_bdevs=2 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local superblock=true 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local raid_bdev 00:27:40.817 07:40:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i = 1 )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev1 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # echo BaseBdev2 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i++ )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # (( i <= num_base_bdevs )) 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local base_bdevs 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local raid_bdev_name=Existed_Raid 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local strip_size_create_arg 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # local superblock_create_arg 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # '[' raid1 '!=' raid1 ']' 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # strip_size=0 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # '[' true = true ']' 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@239 -- # superblock_create_arg=-s 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # raid_pid=65283 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # echo 'Process raid pid: 65283' 00:27:40.817 Process raid pid: 65283 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@247 -- # waitforlisten 65283 /var/tmp/spdk-raid.sock 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65283 ']' 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.817 07:40:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:41.076 [2024-05-16 07:40:34.373814] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:41.076 [2024-05-16 07:40:34.374080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:27:41.642 EAL: TSC is not safe to use in SMP mode 00:27:41.642 EAL: TSC is not invariant 00:27:41.642 [2024-05-16 07:40:34.896334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.642 [2024-05-16 07:40:34.997632] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:27:41.642 [2024-05-16 07:40:35.000310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.642 [2024-05-16 07:40:35.001251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:41.642 [2024-05-16 07:40:35.001268] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:41.926 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:41.926 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:27:41.926 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:42.185 [2024-05-16 07:40:35.702102] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:42.185 [2024-05-16 07:40:35.702159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:42.185 [2024-05-16 07:40:35.702165] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:42.185 [2024-05-16 07:40:35.702183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:42.185 07:40:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.185 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.443 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:42.443 "name": "Existed_Raid", 00:27:42.443 "uuid": "9523fd1b-1357-11ef-8e8f-9dd684e56d79", 00:27:42.443 "strip_size_kb": 0, 00:27:42.443 "state": "configuring", 00:27:42.443 "raid_level": "raid1", 00:27:42.443 "superblock": true, 00:27:42.443 "num_base_bdevs": 2, 00:27:42.443 "num_base_bdevs_discovered": 0, 00:27:42.443 "num_base_bdevs_operational": 2, 00:27:42.443 "base_bdevs_list": [ 00:27:42.443 { 00:27:42.443 "name": "BaseBdev1", 00:27:42.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.443 "is_configured": false, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 0 00:27:42.443 }, 00:27:42.443 { 00:27:42.443 "name": "BaseBdev2", 00:27:42.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.443 "is_configured": false, 00:27:42.443 "data_offset": 0, 00:27:42.443 "data_size": 0 00:27:42.443 } 00:27:42.443 ] 00:27:42.443 }' 00:27:42.443 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:42.444 07:40:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:43.011 07:40:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:43.011 [2024-05-16 07:40:36.522085] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:43.011 [2024-05-16 07:40:36.522118] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c670500 name Existed_Raid, state configuring 00:27:43.011 07:40:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:43.269 [2024-05-16 07:40:36.754075] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:43.269 [2024-05-16 07:40:36.754127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:43.269 [2024-05-16 07:40:36.754132] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:43.269 [2024-05-16 07:40:36.754140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:43.269 07:40:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:27:43.526 [2024-05-16 07:40:37.023020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:43.526 BaseBdev1 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # 
waitforbdev BaseBdev1 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:43.526 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:43.783 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:44.073 [ 00:27:44.073 { 00:27:44.073 "name": "BaseBdev1", 00:27:44.073 "aliases": [ 00:27:44.073 "95ed66e8-1357-11ef-8e8f-9dd684e56d79" 00:27:44.073 ], 00:27:44.073 "product_name": "Malloc disk", 00:27:44.073 "block_size": 4128, 00:27:44.073 "num_blocks": 8192, 00:27:44.073 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:44.073 "md_size": 32, 00:27:44.073 "md_interleave": true, 00:27:44.073 "dif_type": 0, 00:27:44.073 "assigned_rate_limits": { 00:27:44.073 "rw_ios_per_sec": 0, 00:27:44.073 "rw_mbytes_per_sec": 0, 00:27:44.073 "r_mbytes_per_sec": 0, 00:27:44.073 "w_mbytes_per_sec": 0 00:27:44.073 }, 00:27:44.073 "claimed": true, 00:27:44.073 "claim_type": "exclusive_write", 00:27:44.073 "zoned": false, 00:27:44.073 "supported_io_types": { 00:27:44.073 "read": true, 00:27:44.073 "write": true, 00:27:44.073 "unmap": true, 00:27:44.073 "write_zeroes": true, 00:27:44.073 "flush": true, 00:27:44.073 "reset": true, 00:27:44.073 "compare": false, 00:27:44.073 "compare_and_write": false, 00:27:44.073 "abort": true, 00:27:44.073 "nvme_admin": false, 00:27:44.073 "nvme_io": false 00:27:44.073 }, 00:27:44.073 "memory_domains": [ 00:27:44.073 { 00:27:44.073 "dma_device_id": "system", 00:27:44.073 "dma_device_type": 1 00:27:44.073 }, 00:27:44.073 { 00:27:44.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.073 "dma_device_type": 2 00:27:44.073 } 00:27:44.073 ], 00:27:44.073 "driver_specific": {} 00:27:44.073 } 00:27:44.073 ] 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:44.073 07:40:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.073 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.331 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:44.331 "name": "Existed_Raid", 00:27:44.331 "uuid": "95c481d8-1357-11ef-8e8f-9dd684e56d79", 00:27:44.331 "strip_size_kb": 0, 00:27:44.331 "state": "configuring", 00:27:44.331 "raid_level": "raid1", 00:27:44.331 "superblock": true, 00:27:44.331 "num_base_bdevs": 2, 00:27:44.331 "num_base_bdevs_discovered": 1, 00:27:44.331 "num_base_bdevs_operational": 2, 00:27:44.331 "base_bdevs_list": [ 00:27:44.331 { 00:27:44.331 "name": "BaseBdev1", 00:27:44.331 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:44.331 "is_configured": true, 00:27:44.331 "data_offset": 256, 00:27:44.331 "data_size": 7936 00:27:44.331 }, 00:27:44.331 { 00:27:44.331 "name": "BaseBdev2", 00:27:44.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.331 "is_configured": false, 00:27:44.331 "data_offset": 0, 00:27:44.331 "data_size": 0 00:27:44.331 } 00:27:44.331 ] 00:27:44.331 }' 00:27:44.331 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:44.331 07:40:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:44.949 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:45.231 [2024-05-16 07:40:38.490066] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:45.231 [2024-05-16 07:40:38.490100] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c670500 name Existed_Raid, state configuring 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:45.231 [2024-05-16 07:40:38.710089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:45.231 [2024-05-16 07:40:38.710850] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:45.231 [2024-05-16 07:40:38.710890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i = 1 )) 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.231 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.489 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:45.489 "name": "Existed_Raid", 00:27:45.489 "uuid": "96eef861-1357-11ef-8e8f-9dd684e56d79", 00:27:45.489 "strip_size_kb": 0, 00:27:45.489 "state": "configuring", 00:27:45.489 "raid_level": "raid1", 00:27:45.489 "superblock": true, 00:27:45.489 "num_base_bdevs": 2, 00:27:45.489 "num_base_bdevs_discovered": 1, 00:27:45.489 "num_base_bdevs_operational": 2, 00:27:45.489 "base_bdevs_list": [ 00:27:45.489 { 00:27:45.489 "name": "BaseBdev1", 00:27:45.489 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:45.489 "is_configured": true, 00:27:45.489 "data_offset": 256, 00:27:45.489 "data_size": 7936 00:27:45.489 }, 00:27:45.489 { 00:27:45.489 "name": "BaseBdev2", 00:27:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.489 "is_configured": false, 00:27:45.489 "data_offset": 0, 00:27:45.489 "data_size": 0 00:27:45.489 } 00:27:45.489 ] 00:27:45.489 }' 00:27:45.489 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:45.489 07:40:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:45.747 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:27:46.006 [2024-05-16 07:40:39.466125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:46.006 [2024-05-16 07:40:39.466177] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c670a00 00:27:46.006 [2024-05-16 07:40:39.466183] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:46.006 [2024-05-16 07:40:39.466201] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c6d3e20 
00:27:46.006 [2024-05-16 07:40:39.466214] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c670a00 00:27:46.006 [2024-05-16 07:40:39.466218] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c670a00 00:27:46.006 [2024-05-16 07:40:39.466227] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.006 BaseBdev2 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@269 -- # waitforbdev BaseBdev2 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:27:46.006 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:46.264 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:46.522 [ 00:27:46.522 { 00:27:46.522 "name": "BaseBdev2", 00:27:46.522 "aliases": [ 00:27:46.522 "97625387-1357-11ef-8e8f-9dd684e56d79" 00:27:46.522 ], 00:27:46.522 "product_name": "Malloc disk", 00:27:46.522 "block_size": 4128, 00:27:46.522 "num_blocks": 8192, 00:27:46.522 "uuid": "97625387-1357-11ef-8e8f-9dd684e56d79", 00:27:46.522 "md_size": 32, 00:27:46.522 "md_interleave": true, 00:27:46.522 "dif_type": 0, 00:27:46.522 "assigned_rate_limits": { 00:27:46.522 "rw_ios_per_sec": 0, 00:27:46.522 "rw_mbytes_per_sec": 0, 00:27:46.522 "r_mbytes_per_sec": 0, 00:27:46.522 "w_mbytes_per_sec": 0 00:27:46.522 }, 00:27:46.522 "claimed": true, 00:27:46.522 "claim_type": "exclusive_write", 00:27:46.522 "zoned": false, 00:27:46.522 "supported_io_types": { 00:27:46.522 "read": true, 00:27:46.522 "write": true, 00:27:46.522 "unmap": true, 00:27:46.522 "write_zeroes": true, 00:27:46.522 "flush": true, 00:27:46.523 "reset": true, 00:27:46.523 "compare": false, 00:27:46.523 "compare_and_write": false, 00:27:46.523 "abort": true, 00:27:46.523 "nvme_admin": false, 00:27:46.523 "nvme_io": false 00:27:46.523 }, 00:27:46.523 "memory_domains": [ 00:27:46.523 { 00:27:46.523 "dma_device_id": "system", 00:27:46.523 "dma_device_type": 1 00:27:46.523 }, 00:27:46.523 { 00:27:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.523 "dma_device_type": 2 00:27:46.523 } 00:27:46.523 ], 00:27:46.523 "driver_specific": {} 00:27:46.523 } 00:27:46.523 ] 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i++ )) 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # (( i < num_base_bdevs )) 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.523 07:40:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.781 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:46.781 "name": "Existed_Raid", 00:27:46.781 "uuid": "96eef861-1357-11ef-8e8f-9dd684e56d79", 00:27:46.781 "strip_size_kb": 0, 00:27:46.781 "state": "online", 00:27:46.781 "raid_level": "raid1", 00:27:46.781 "superblock": true, 00:27:46.781 "num_base_bdevs": 2, 00:27:46.781 "num_base_bdevs_discovered": 2, 00:27:46.781 "num_base_bdevs_operational": 2, 00:27:46.781 "base_bdevs_list": [ 00:27:46.781 { 00:27:46.781 "name": "BaseBdev1", 00:27:46.781 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:46.781 "is_configured": true, 00:27:46.781 "data_offset": 256, 00:27:46.781 "data_size": 7936 00:27:46.781 }, 00:27:46.781 { 00:27:46.781 "name": "BaseBdev2", 00:27:46.781 "uuid": "97625387-1357-11ef-8e8f-9dd684e56d79", 00:27:46.781 "is_configured": true, 00:27:46.781 "data_offset": 256, 00:27:46.781 "data_size": 7936 00:27:46.781 } 00:27:46.781 ] 00:27:46.781 }' 00:27:46.781 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:46.781 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # verify_raid_bdev_properties Existed_Raid 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=Existed_Raid 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@199 -- # local name 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:47.039 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:47.297 [2024-05-16 07:40:40.794108] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:47.297 "name": "Existed_Raid", 00:27:47.297 "aliases": [ 00:27:47.297 "96eef861-1357-11ef-8e8f-9dd684e56d79" 00:27:47.297 ], 00:27:47.297 "product_name": "Raid Volume", 00:27:47.297 "block_size": 4128, 00:27:47.297 "num_blocks": 7936, 00:27:47.297 "uuid": "96eef861-1357-11ef-8e8f-9dd684e56d79", 00:27:47.297 "md_size": 32, 00:27:47.297 "md_interleave": true, 00:27:47.297 "dif_type": 0, 00:27:47.297 "assigned_rate_limits": { 00:27:47.297 "rw_ios_per_sec": 0, 00:27:47.297 "rw_mbytes_per_sec": 0, 00:27:47.297 "r_mbytes_per_sec": 0, 00:27:47.297 "w_mbytes_per_sec": 0 00:27:47.297 }, 00:27:47.297 "claimed": false, 00:27:47.297 "zoned": false, 00:27:47.297 "supported_io_types": { 00:27:47.297 "read": true, 00:27:47.297 "write": true, 00:27:47.297 "unmap": false, 00:27:47.297 "write_zeroes": true, 00:27:47.297 "flush": false, 00:27:47.297 "reset": true, 00:27:47.297 "compare": false, 00:27:47.297 "compare_and_write": false, 00:27:47.297 "abort": false, 00:27:47.297 "nvme_admin": false, 00:27:47.297 "nvme_io": false 00:27:47.297 }, 00:27:47.297 "memory_domains": [ 00:27:47.297 { 00:27:47.297 "dma_device_id": "system", 00:27:47.297 "dma_device_type": 1 00:27:47.297 }, 00:27:47.297 { 00:27:47.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.297 "dma_device_type": 2 00:27:47.297 }, 00:27:47.297 { 00:27:47.297 "dma_device_id": "system", 00:27:47.297 "dma_device_type": 1 00:27:47.297 }, 00:27:47.297 { 00:27:47.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.297 "dma_device_type": 2 00:27:47.297 } 00:27:47.297 ], 00:27:47.297 "driver_specific": { 00:27:47.297 "raid": { 00:27:47.297 "uuid": "96eef861-1357-11ef-8e8f-9dd684e56d79", 00:27:47.297 "strip_size_kb": 0, 00:27:47.297 "state": "online", 00:27:47.297 "raid_level": "raid1", 00:27:47.297 "superblock": true, 00:27:47.297 "num_base_bdevs": 2, 00:27:47.297 "num_base_bdevs_discovered": 2, 00:27:47.297 "num_base_bdevs_operational": 2, 00:27:47.297 "base_bdevs_list": [ 00:27:47.297 { 00:27:47.297 "name": "BaseBdev1", 00:27:47.297 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:47.297 "is_configured": true, 00:27:47.297 "data_offset": 256, 00:27:47.297 "data_size": 7936 00:27:47.297 }, 00:27:47.297 { 00:27:47.297 "name": "BaseBdev2", 00:27:47.297 "uuid": "97625387-1357-11ef-8e8f-9dd684e56d79", 00:27:47.297 "is_configured": true, 00:27:47.297 "data_offset": 256, 00:27:47.297 "data_size": 7936 00:27:47.297 } 00:27:47.297 ] 00:27:47.297 } 00:27:47.297 } 00:27:47.297 }' 00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='BaseBdev1 00:27:47.297 BaseBdev2' 00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 
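The per-base-bdev loop entered above drives the property checks traced below for BaseBdev1 and BaseBdev2; condensed, and with an RPC shorthand variable added here only for readability, it amounts to roughly:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 4128 ]]  # 4096 data bytes + 32 interleaved md bytes
        [[ $(jq .md_size       <<< "$info") == 32 ]]
        [[ $(jq .md_interleave <<< "$info") == true ]]
        [[ $(jq .dif_type      <<< "$info") == 0 ]]
    done
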
00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:47.297 07:40:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:47.555 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:47.555 "name": "BaseBdev1", 00:27:47.555 "aliases": [ 00:27:47.555 "95ed66e8-1357-11ef-8e8f-9dd684e56d79" 00:27:47.555 ], 00:27:47.555 "product_name": "Malloc disk", 00:27:47.555 "block_size": 4128, 00:27:47.555 "num_blocks": 8192, 00:27:47.555 "uuid": "95ed66e8-1357-11ef-8e8f-9dd684e56d79", 00:27:47.555 "md_size": 32, 00:27:47.555 "md_interleave": true, 00:27:47.555 "dif_type": 0, 00:27:47.555 "assigned_rate_limits": { 00:27:47.555 "rw_ios_per_sec": 0, 00:27:47.555 "rw_mbytes_per_sec": 0, 00:27:47.555 "r_mbytes_per_sec": 0, 00:27:47.555 "w_mbytes_per_sec": 0 00:27:47.555 }, 00:27:47.555 "claimed": true, 00:27:47.555 "claim_type": "exclusive_write", 00:27:47.555 "zoned": false, 00:27:47.555 "supported_io_types": { 00:27:47.555 "read": true, 00:27:47.555 "write": true, 00:27:47.555 "unmap": true, 00:27:47.555 "write_zeroes": true, 00:27:47.555 "flush": true, 00:27:47.555 "reset": true, 00:27:47.555 "compare": false, 00:27:47.555 "compare_and_write": false, 00:27:47.555 "abort": true, 00:27:47.555 "nvme_admin": false, 00:27:47.555 "nvme_io": false 00:27:47.555 }, 00:27:47.555 "memory_domains": [ 00:27:47.555 { 00:27:47.555 "dma_device_id": "system", 00:27:47.555 "dma_device_type": 1 00:27:47.555 }, 00:27:47.555 { 00:27:47.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.555 "dma_device_type": 2 00:27:47.555 } 00:27:47.555 ], 00:27:47.555 "driver_specific": {} 00:27:47.555 }' 00:27:47.555 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:47.555 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:47.819 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:48.121 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:48.121 "name": "BaseBdev2", 00:27:48.121 "aliases": [ 00:27:48.121 "97625387-1357-11ef-8e8f-9dd684e56d79" 00:27:48.121 ], 00:27:48.121 "product_name": "Malloc disk", 00:27:48.121 "block_size": 4128, 00:27:48.121 "num_blocks": 8192, 00:27:48.121 "uuid": "97625387-1357-11ef-8e8f-9dd684e56d79", 00:27:48.121 "md_size": 32, 00:27:48.121 "md_interleave": true, 00:27:48.121 "dif_type": 0, 00:27:48.121 "assigned_rate_limits": { 00:27:48.121 "rw_ios_per_sec": 0, 00:27:48.121 "rw_mbytes_per_sec": 0, 00:27:48.121 "r_mbytes_per_sec": 0, 00:27:48.121 "w_mbytes_per_sec": 0 00:27:48.121 }, 00:27:48.121 "claimed": true, 00:27:48.121 "claim_type": "exclusive_write", 00:27:48.121 "zoned": false, 00:27:48.121 "supported_io_types": { 00:27:48.121 "read": true, 00:27:48.121 "write": true, 00:27:48.121 "unmap": true, 00:27:48.121 "write_zeroes": true, 00:27:48.121 "flush": true, 00:27:48.121 "reset": true, 00:27:48.121 "compare": false, 00:27:48.121 "compare_and_write": false, 00:27:48.121 "abort": true, 00:27:48.121 "nvme_admin": false, 00:27:48.121 "nvme_io": false 00:27:48.121 }, 00:27:48.121 "memory_domains": [ 00:27:48.121 { 00:27:48.121 "dma_device_id": "system", 00:27:48.121 "dma_device_type": 1 00:27:48.121 }, 00:27:48.121 { 00:27:48.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.121 "dma_device_type": 2 00:27:48.121 } 00:27:48.121 ], 00:27:48.121 "driver_specific": {} 00:27:48.121 }' 00:27:48.121 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:48.122 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:48.380 [2024-05-16 07:40:41.738109] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:48.380 07:40:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # local expected_state 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@277 -- # has_redundancy raid1 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@280 -- # expected_state=online 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@282 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.380 07:40:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.638 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:48.638 "name": "Existed_Raid", 00:27:48.638 "uuid": "96eef861-1357-11ef-8e8f-9dd684e56d79", 00:27:48.638 "strip_size_kb": 0, 00:27:48.638 "state": "online", 00:27:48.638 "raid_level": "raid1", 00:27:48.638 "superblock": true, 00:27:48.638 "num_base_bdevs": 2, 00:27:48.638 "num_base_bdevs_discovered": 1, 00:27:48.638 "num_base_bdevs_operational": 1, 00:27:48.638 "base_bdevs_list": [ 00:27:48.638 { 00:27:48.638 "name": null, 00:27:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.638 "is_configured": false, 00:27:48.638 "data_offset": 256, 00:27:48.638 "data_size": 7936 00:27:48.638 }, 00:27:48.638 { 00:27:48.638 "name": "BaseBdev2", 00:27:48.638 "uuid": "97625387-1357-11ef-8e8f-9dd684e56d79", 00:27:48.638 "is_configured": true, 00:27:48.638 "data_offset": 256, 00:27:48.638 "data_size": 7936 00:27:48.638 } 00:27:48.638 ] 00:27:48.638 }' 00:27:48.638 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:48.638 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:48.896 07:40:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:48.896 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:48.896 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.896 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # jq -r '.[0]["name"]' 00:27:49.154 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # raid_bdev=Existed_Raid 00:27:49.154 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@288 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:49.154 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:49.412 [2024-05-16 07:40:42.819061] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:49.412 [2024-05-16 07:40:42.819105] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:49.412 [2024-05-16 07:40:42.823903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.412 [2024-05-16 07:40:42.823916] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.412 [2024-05-16 07:40:42.823921] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c670a00 name Existed_Raid, state offline 00:27:49.412 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:49.412 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:49.412 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # jq -r '.[0]["name"] | select(.)' 00:27:49.412 07:40:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # raid_bdev= 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@295 -- # '[' -n '' ']' 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@300 -- # '[' 2 -gt 2 ']' 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@342 -- # killprocess 65283 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65283 ']' 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65283 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65283 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:27:49.671 
07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65283' 00:27:49.671 killing process with pid 65283 00:27:49.671 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 65283 00:27:49.672 [2024-05-16 07:40:43.101533] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:49.672 [2024-05-16 07:40:43.101581] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:49.672 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 65283 00:27:49.931 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@344 -- # return 0 00:27:49.931 00:27:49.931 real 0m8.918s 00:27:49.931 user 0m15.408s 00:27:49.931 sys 0m1.693s 00:27:49.931 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:49.931 07:40:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:49.931 ************************************ 00:27:49.931 END TEST raid_state_function_test_sb_md_interleaved 00:27:49.931 ************************************ 00:27:49.931 07:40:43 bdev_raid -- bdev/bdev_raid.sh@847 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:27:49.931 07:40:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:27:49.931 07:40:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:49.931 07:40:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:49.931 ************************************ 00:27:49.931 START TEST raid_superblock_test_md_interleaved 00:27:49.931 ************************************ 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
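The raid_superblock_test_md_interleaved run that starts here uses the same RPC socket; as the trace below shows, it launches a bare bdev_svc app with raid debug logging and waits for the socket before issuing RPCs, roughly:

    # binary path, flags and pid are the ones from this run; the '&' / '$!' backgrounding is reconstructed from the trace
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!                                       # 65553 in this run
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # autotest_common.sh helper: polls until the UNIX socket accepts RPCs
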
00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=65553 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 65553 /var/tmp/spdk-raid.sock 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 65553 ']' 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:49.931 07:40:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:49.931 [2024-05-16 07:40:43.335433] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:49.931 [2024-05-16 07:40:43.335667] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:27:50.864 EAL: TSC is not safe to use in SMP mode 00:27:50.864 EAL: TSC is not invariant 00:27:50.864 [2024-05-16 07:40:44.132953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.864 [2024-05-16 07:40:44.231917] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
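Once the app is up, the trace that follows builds the two base bdevs and assembles them into the superblock-backed raid1 volume; condensed, the RPC sequence issued in this run is:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # per base bdev: a 32 MiB malloc (4096-byte blocks, 32 bytes of interleaved md) wrapped in a passthru with a fixed UUID
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble raid1 from the passthrus; -s writes the on-disk superblock
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
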
00:27:50.864 [2024-05-16 07:40:44.234613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.864 [2024-05-16 07:40:44.235522] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:50.864 [2024-05-16 07:40:44.235538] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:51.121 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:27:51.378 malloc1 00:27:51.378 07:40:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:51.634 [2024-05-16 07:40:44.979899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:51.634 [2024-05-16 07:40:44.979973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.634 [2024-05-16 07:40:44.980562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1780 00:27:51.634 [2024-05-16 07:40:44.980587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.634 [2024-05-16 07:40:44.981272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.634 [2024-05-16 07:40:44.981307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:51.634 pt1 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:51.634 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:27:51.891 malloc2 00:27:51.891 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:52.149 [2024-05-16 07:40:45.523879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:52.149 [2024-05-16 07:40:45.523933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.149 [2024-05-16 07:40:45.523960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1c80 00:27:52.149 [2024-05-16 07:40:45.523968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.149 [2024-05-16 07:40:45.524377] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.149 [2024-05-16 07:40:45.524402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:52.149 pt2 00:27:52.149 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:52.149 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:52.149 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:52.405 [2024-05-16 07:40:45.795901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:52.405 [2024-05-16 07:40:45.796293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:52.405 [2024-05-16 07:40:45.796346] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f1f00 00:27:52.405 [2024-05-16 07:40:45.796352] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:52.405 [2024-05-16 07:40:45.796388] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654e20 00:27:52.405 [2024-05-16 07:40:45.796402] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f1f00 00:27:52.405 [2024-05-16 07:40:45.796406] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b5f1f00 00:27:52.405 [2024-05-16 07:40:45.796416] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:52.405 07:40:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.405 07:40:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.663 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:52.663 "name": "raid_bdev1", 00:27:52.663 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:52.663 "strip_size_kb": 0, 00:27:52.663 "state": "online", 00:27:52.663 "raid_level": "raid1", 00:27:52.663 "superblock": true, 00:27:52.663 "num_base_bdevs": 2, 00:27:52.664 "num_base_bdevs_discovered": 2, 00:27:52.664 "num_base_bdevs_operational": 2, 00:27:52.664 "base_bdevs_list": [ 00:27:52.664 { 00:27:52.664 "name": "pt1", 00:27:52.664 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:52.664 "is_configured": true, 00:27:52.664 "data_offset": 256, 00:27:52.664 "data_size": 7936 00:27:52.664 }, 00:27:52.664 { 00:27:52.664 "name": "pt2", 00:27:52.664 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:52.664 "is_configured": true, 00:27:52.664 "data_offset": 256, 00:27:52.664 "data_size": 7936 00:27:52.664 } 00:27:52.664 ] 00:27:52.664 }' 00:27:52.664 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:52.664 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:52.921 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:53.255 [2024-05-16 07:40:46.707942] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:53.255 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:53.255 "name": 
"raid_bdev1", 00:27:53.255 "aliases": [ 00:27:53.255 "9b282e0d-1357-11ef-8e8f-9dd684e56d79" 00:27:53.255 ], 00:27:53.255 "product_name": "Raid Volume", 00:27:53.255 "block_size": 4128, 00:27:53.255 "num_blocks": 7936, 00:27:53.255 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:53.255 "md_size": 32, 00:27:53.255 "md_interleave": true, 00:27:53.255 "dif_type": 0, 00:27:53.255 "assigned_rate_limits": { 00:27:53.255 "rw_ios_per_sec": 0, 00:27:53.255 "rw_mbytes_per_sec": 0, 00:27:53.255 "r_mbytes_per_sec": 0, 00:27:53.255 "w_mbytes_per_sec": 0 00:27:53.255 }, 00:27:53.255 "claimed": false, 00:27:53.255 "zoned": false, 00:27:53.255 "supported_io_types": { 00:27:53.255 "read": true, 00:27:53.255 "write": true, 00:27:53.255 "unmap": false, 00:27:53.255 "write_zeroes": true, 00:27:53.255 "flush": false, 00:27:53.255 "reset": true, 00:27:53.255 "compare": false, 00:27:53.255 "compare_and_write": false, 00:27:53.255 "abort": false, 00:27:53.255 "nvme_admin": false, 00:27:53.255 "nvme_io": false 00:27:53.255 }, 00:27:53.255 "memory_domains": [ 00:27:53.255 { 00:27:53.255 "dma_device_id": "system", 00:27:53.255 "dma_device_type": 1 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.255 "dma_device_type": 2 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "dma_device_id": "system", 00:27:53.255 "dma_device_type": 1 00:27:53.255 }, 00:27:53.255 { 00:27:53.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.255 "dma_device_type": 2 00:27:53.255 } 00:27:53.255 ], 00:27:53.255 "driver_specific": { 00:27:53.255 "raid": { 00:27:53.255 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:53.255 "strip_size_kb": 0, 00:27:53.255 "state": "online", 00:27:53.255 "raid_level": "raid1", 00:27:53.255 "superblock": true, 00:27:53.255 "num_base_bdevs": 2, 00:27:53.255 "num_base_bdevs_discovered": 2, 00:27:53.255 "num_base_bdevs_operational": 2, 00:27:53.256 "base_bdevs_list": [ 00:27:53.256 { 00:27:53.256 "name": "pt1", 00:27:53.256 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:53.256 "is_configured": true, 00:27:53.256 "data_offset": 256, 00:27:53.256 "data_size": 7936 00:27:53.256 }, 00:27:53.256 { 00:27:53.256 "name": "pt2", 00:27:53.256 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:53.256 "is_configured": true, 00:27:53.256 "data_offset": 256, 00:27:53.256 "data_size": 7936 00:27:53.256 } 00:27:53.256 ] 00:27:53.256 } 00:27:53.256 } 00:27:53.256 }' 00:27:53.256 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:53.256 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:53.256 pt2' 00:27:53.256 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:53.256 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:53.256 07:40:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:53.513 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:53.513 "name": "pt1", 00:27:53.513 "aliases": [ 00:27:53.513 "051be346-eec5-e651-b9d8-dae0083fff2d" 00:27:53.513 ], 00:27:53.513 "product_name": "passthru", 00:27:53.513 "block_size": 4128, 00:27:53.513 "num_blocks": 8192, 00:27:53.513 "uuid": 
"051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:53.514 "md_size": 32, 00:27:53.514 "md_interleave": true, 00:27:53.514 "dif_type": 0, 00:27:53.514 "assigned_rate_limits": { 00:27:53.514 "rw_ios_per_sec": 0, 00:27:53.514 "rw_mbytes_per_sec": 0, 00:27:53.514 "r_mbytes_per_sec": 0, 00:27:53.514 "w_mbytes_per_sec": 0 00:27:53.514 }, 00:27:53.514 "claimed": true, 00:27:53.514 "claim_type": "exclusive_write", 00:27:53.514 "zoned": false, 00:27:53.514 "supported_io_types": { 00:27:53.514 "read": true, 00:27:53.514 "write": true, 00:27:53.514 "unmap": true, 00:27:53.514 "write_zeroes": true, 00:27:53.514 "flush": true, 00:27:53.514 "reset": true, 00:27:53.514 "compare": false, 00:27:53.514 "compare_and_write": false, 00:27:53.514 "abort": true, 00:27:53.514 "nvme_admin": false, 00:27:53.514 "nvme_io": false 00:27:53.514 }, 00:27:53.514 "memory_domains": [ 00:27:53.514 { 00:27:53.514 "dma_device_id": "system", 00:27:53.514 "dma_device_type": 1 00:27:53.514 }, 00:27:53.514 { 00:27:53.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.514 "dma_device_type": 2 00:27:53.514 } 00:27:53.514 ], 00:27:53.514 "driver_specific": { 00:27:53.514 "passthru": { 00:27:53.514 "name": "pt1", 00:27:53.514 "base_bdev_name": "malloc1" 00:27:53.514 } 00:27:53.514 } 00:27:53.514 }' 00:27:53.514 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:53.772 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:54.031 "name": "pt2", 00:27:54.031 "aliases": [ 00:27:54.031 "8432db01-3868-715d-b44e-985caadac3f3" 00:27:54.031 ], 00:27:54.031 "product_name": "passthru", 00:27:54.031 "block_size": 4128, 00:27:54.031 "num_blocks": 8192, 00:27:54.031 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:54.031 "md_size": 32, 00:27:54.031 "md_interleave": true, 00:27:54.031 
"dif_type": 0, 00:27:54.031 "assigned_rate_limits": { 00:27:54.031 "rw_ios_per_sec": 0, 00:27:54.031 "rw_mbytes_per_sec": 0, 00:27:54.031 "r_mbytes_per_sec": 0, 00:27:54.031 "w_mbytes_per_sec": 0 00:27:54.031 }, 00:27:54.031 "claimed": true, 00:27:54.031 "claim_type": "exclusive_write", 00:27:54.031 "zoned": false, 00:27:54.031 "supported_io_types": { 00:27:54.031 "read": true, 00:27:54.031 "write": true, 00:27:54.031 "unmap": true, 00:27:54.031 "write_zeroes": true, 00:27:54.031 "flush": true, 00:27:54.031 "reset": true, 00:27:54.031 "compare": false, 00:27:54.031 "compare_and_write": false, 00:27:54.031 "abort": true, 00:27:54.031 "nvme_admin": false, 00:27:54.031 "nvme_io": false 00:27:54.031 }, 00:27:54.031 "memory_domains": [ 00:27:54.031 { 00:27:54.031 "dma_device_id": "system", 00:27:54.031 "dma_device_type": 1 00:27:54.031 }, 00:27:54.031 { 00:27:54.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.031 "dma_device_type": 2 00:27:54.031 } 00:27:54.031 ], 00:27:54.031 "driver_specific": { 00:27:54.031 "passthru": { 00:27:54.031 "name": "pt2", 00:27:54.031 "base_bdev_name": "malloc2" 00:27:54.031 } 00:27:54.031 } 00:27:54.031 }' 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:54.031 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:54.288 [2024-05-16 07:40:47.747939] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:54.288 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9b282e0d-1357-11ef-8e8f-9dd684e56d79 00:27:54.288 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 9b282e0d-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:54.288 07:40:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:54.545 [2024-05-16 07:40:48.035924] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:27:54.545 [2024-05-16 07:40:48.035946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.545 [2024-05-16 07:40:48.035963] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.545 [2024-05-16 07:40:48.035977] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.545 [2024-05-16 07:40:48.035981] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f1f00 name raid_bdev1, state offline 00:27:54.545 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.545 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:55.110 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:55.367 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:55.367 07:40:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:55.625 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:55.883 [2024-05-16 07:40:49.395935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:55.883 [2024-05-16 07:40:49.396400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:55.883 [2024-05-16 07:40:49.396423] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:55.883 [2024-05-16 07:40:49.396459] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:55.883 [2024-05-16 07:40:49.396468] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:55.883 [2024-05-16 07:40:49.396473] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f1c80 name raid_bdev1, state configuring 00:27:55.883 request: 00:27:55.883 { 00:27:55.883 "name": "raid_bdev1", 00:27:55.883 "raid_level": "raid1", 00:27:55.883 "base_bdevs": [ 00:27:55.883 "malloc1", 00:27:55.883 "malloc2" 00:27:55.883 ], 00:27:55.883 "superblock": false, 00:27:55.883 "method": "bdev_raid_create", 00:27:55.883 "req_id": 1 00:27:55.883 } 00:27:55.883 Got JSON-RPC error response 00:27:55.883 response: 00:27:55.883 { 00:27:55.883 "code": -17, 00:27:55.883 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:55.883 } 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.883 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:56.448 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:56.448 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:56.449 [2024-05-16 07:40:49.951925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:56.449 [2024-05-16 07:40:49.951969] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.449 [2024-05-16 07:40:49.951994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1780 00:27:56.449 [2024-05-16 07:40:49.952002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.449 [2024-05-16 07:40:49.952458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.449 [2024-05-16 07:40:49.952489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:56.449 [2024-05-16 07:40:49.952504] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:56.449 [2024-05-16 07:40:49.952515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:56.449 pt1 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.449 07:40:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.013 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:57.013 "name": "raid_bdev1", 00:27:57.013 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:57.013 "strip_size_kb": 0, 00:27:57.013 "state": "configuring", 00:27:57.013 "raid_level": "raid1", 00:27:57.013 "superblock": true, 00:27:57.013 "num_base_bdevs": 2, 00:27:57.013 "num_base_bdevs_discovered": 1, 00:27:57.013 "num_base_bdevs_operational": 2, 00:27:57.013 "base_bdevs_list": [ 00:27:57.013 { 00:27:57.013 "name": "pt1", 00:27:57.013 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:57.013 "is_configured": true, 00:27:57.013 "data_offset": 256, 00:27:57.013 "data_size": 7936 00:27:57.013 }, 00:27:57.013 { 00:27:57.013 "name": null, 00:27:57.013 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:57.013 "is_configured": false, 00:27:57.013 "data_offset": 256, 00:27:57.013 "data_size": 7936 00:27:57.013 } 00:27:57.013 ] 00:27:57.013 }' 00:27:57.013 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:57.013 07:40:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.271 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:57.271 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:57.271 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:57.271 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:57.529 [2024-05-16 07:40:50.959925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:57.529 [2024-05-16 07:40:50.959974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.529 [2024-05-16 07:40:50.959999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1f00 00:27:57.529 [2024-05-16 07:40:50.960006] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.529 [2024-05-16 07:40:50.960050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.529 [2024-05-16 07:40:50.960058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:57.529 [2024-05-16 07:40:50.960073] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:57.529 [2024-05-16 07:40:50.960080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:57.529 [2024-05-16 07:40:50.960098] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f2180 00:27:57.529 [2024-05-16 07:40:50.960102] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:57.529 [2024-05-16 07:40:50.960117] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654e20 00:27:57.529 [2024-05-16 07:40:50.960129] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f2180 00:27:57.529 [2024-05-16 07:40:50.960132] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b5f2180 00:27:57.529 [2024-05-16 07:40:50.960141] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.529 pt2 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.529 07:40:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.786 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:57.786 "name": "raid_bdev1", 00:27:57.786 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:57.786 "strip_size_kb": 0, 00:27:57.786 "state": "online", 00:27:57.786 "raid_level": "raid1", 00:27:57.786 "superblock": true, 00:27:57.786 "num_base_bdevs": 2, 00:27:57.786 "num_base_bdevs_discovered": 2, 00:27:57.786 "num_base_bdevs_operational": 2, 00:27:57.786 "base_bdevs_list": [ 00:27:57.786 { 00:27:57.786 "name": "pt1", 00:27:57.786 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:57.786 "is_configured": true, 00:27:57.786 "data_offset": 256, 00:27:57.786 "data_size": 7936 00:27:57.786 }, 00:27:57.786 { 00:27:57.786 "name": "pt2", 00:27:57.786 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:57.786 "is_configured": true, 00:27:57.786 "data_offset": 256, 00:27:57.786 "data_size": 7936 00:27:57.786 } 00:27:57.786 ] 00:27:57.786 }' 00:27:57.786 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:57.786 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_name=raid_bdev1 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local raid_bdev_info 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_info 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local base_bdev_names 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # local name 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq '.[]' 00:27:58.353 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:58.353 [2024-05-16 07:40:51.895957] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.611 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # raid_bdev_info='{ 00:27:58.611 "name": "raid_bdev1", 00:27:58.611 "aliases": [ 00:27:58.611 "9b282e0d-1357-11ef-8e8f-9dd684e56d79" 00:27:58.611 ], 00:27:58.611 "product_name": "Raid Volume", 00:27:58.611 "block_size": 4128, 00:27:58.611 "num_blocks": 7936, 00:27:58.611 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:58.611 "md_size": 32, 00:27:58.611 "md_interleave": true, 
00:27:58.611 "dif_type": 0, 00:27:58.611 "assigned_rate_limits": { 00:27:58.611 "rw_ios_per_sec": 0, 00:27:58.611 "rw_mbytes_per_sec": 0, 00:27:58.611 "r_mbytes_per_sec": 0, 00:27:58.611 "w_mbytes_per_sec": 0 00:27:58.611 }, 00:27:58.611 "claimed": false, 00:27:58.611 "zoned": false, 00:27:58.611 "supported_io_types": { 00:27:58.611 "read": true, 00:27:58.611 "write": true, 00:27:58.611 "unmap": false, 00:27:58.611 "write_zeroes": true, 00:27:58.611 "flush": false, 00:27:58.611 "reset": true, 00:27:58.611 "compare": false, 00:27:58.611 "compare_and_write": false, 00:27:58.611 "abort": false, 00:27:58.611 "nvme_admin": false, 00:27:58.611 "nvme_io": false 00:27:58.611 }, 00:27:58.611 "memory_domains": [ 00:27:58.611 { 00:27:58.611 "dma_device_id": "system", 00:27:58.611 "dma_device_type": 1 00:27:58.611 }, 00:27:58.612 { 00:27:58.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.612 "dma_device_type": 2 00:27:58.612 }, 00:27:58.612 { 00:27:58.612 "dma_device_id": "system", 00:27:58.612 "dma_device_type": 1 00:27:58.612 }, 00:27:58.612 { 00:27:58.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.612 "dma_device_type": 2 00:27:58.612 } 00:27:58.612 ], 00:27:58.612 "driver_specific": { 00:27:58.612 "raid": { 00:27:58.612 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:58.612 "strip_size_kb": 0, 00:27:58.612 "state": "online", 00:27:58.612 "raid_level": "raid1", 00:27:58.612 "superblock": true, 00:27:58.612 "num_base_bdevs": 2, 00:27:58.612 "num_base_bdevs_discovered": 2, 00:27:58.612 "num_base_bdevs_operational": 2, 00:27:58.612 "base_bdevs_list": [ 00:27:58.612 { 00:27:58.612 "name": "pt1", 00:27:58.612 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:58.612 "is_configured": true, 00:27:58.612 "data_offset": 256, 00:27:58.612 "data_size": 7936 00:27:58.612 }, 00:27:58.612 { 00:27:58.612 "name": "pt2", 00:27:58.612 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:58.612 "is_configured": true, 00:27:58.612 "data_offset": 256, 00:27:58.612 "data_size": 7936 00:27:58.612 } 00:27:58.612 ] 00:27:58.612 } 00:27:58.612 } 00:27:58.612 }' 00:27:58.612 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.612 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@202 -- # base_bdev_names='pt1 00:27:58.612 pt2' 00:27:58.612 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:58.612 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:58.612 07:40:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:58.612 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:58.612 "name": "pt1", 00:27:58.612 "aliases": [ 00:27:58.612 "051be346-eec5-e651-b9d8-dae0083fff2d" 00:27:58.612 ], 00:27:58.612 "product_name": "passthru", 00:27:58.612 "block_size": 4128, 00:27:58.612 "num_blocks": 8192, 00:27:58.612 "uuid": "051be346-eec5-e651-b9d8-dae0083fff2d", 00:27:58.612 "md_size": 32, 00:27:58.612 "md_interleave": true, 00:27:58.612 "dif_type": 0, 00:27:58.612 "assigned_rate_limits": { 00:27:58.612 "rw_ios_per_sec": 0, 00:27:58.612 "rw_mbytes_per_sec": 0, 00:27:58.612 "r_mbytes_per_sec": 0, 00:27:58.612 "w_mbytes_per_sec": 0 00:27:58.612 }, 00:27:58.612 
"claimed": true, 00:27:58.612 "claim_type": "exclusive_write", 00:27:58.612 "zoned": false, 00:27:58.612 "supported_io_types": { 00:27:58.612 "read": true, 00:27:58.612 "write": true, 00:27:58.612 "unmap": true, 00:27:58.612 "write_zeroes": true, 00:27:58.612 "flush": true, 00:27:58.612 "reset": true, 00:27:58.612 "compare": false, 00:27:58.612 "compare_and_write": false, 00:27:58.612 "abort": true, 00:27:58.612 "nvme_admin": false, 00:27:58.612 "nvme_io": false 00:27:58.612 }, 00:27:58.612 "memory_domains": [ 00:27:58.612 { 00:27:58.612 "dma_device_id": "system", 00:27:58.612 "dma_device_type": 1 00:27:58.612 }, 00:27:58.612 { 00:27:58.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.612 "dma_device_type": 2 00:27:58.612 } 00:27:58.612 ], 00:27:58.612 "driver_specific": { 00:27:58.612 "passthru": { 00:27:58.612 "name": "pt1", 00:27:58.612 "base_bdev_name": "malloc1" 00:27:58.612 } 00:27:58.612 } 00:27:58.612 }' 00:27:58.612 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:58.612 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:58.612 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:58.612 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # for name in $base_bdev_names 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:58.871 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq '.[]' 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # base_bdev_info='{ 00:27:59.128 "name": "pt2", 00:27:59.128 "aliases": [ 00:27:59.128 "8432db01-3868-715d-b44e-985caadac3f3" 00:27:59.128 ], 00:27:59.128 "product_name": "passthru", 00:27:59.128 "block_size": 4128, 00:27:59.128 "num_blocks": 8192, 00:27:59.128 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:59.128 "md_size": 32, 00:27:59.128 "md_interleave": true, 00:27:59.128 "dif_type": 0, 00:27:59.128 "assigned_rate_limits": { 00:27:59.128 "rw_ios_per_sec": 0, 00:27:59.128 "rw_mbytes_per_sec": 0, 00:27:59.128 "r_mbytes_per_sec": 0, 00:27:59.128 "w_mbytes_per_sec": 0 00:27:59.128 }, 00:27:59.128 "claimed": true, 00:27:59.128 "claim_type": "exclusive_write", 00:27:59.128 "zoned": false, 00:27:59.128 "supported_io_types": 
{ 00:27:59.128 "read": true, 00:27:59.128 "write": true, 00:27:59.128 "unmap": true, 00:27:59.128 "write_zeroes": true, 00:27:59.128 "flush": true, 00:27:59.128 "reset": true, 00:27:59.128 "compare": false, 00:27:59.128 "compare_and_write": false, 00:27:59.128 "abort": true, 00:27:59.128 "nvme_admin": false, 00:27:59.128 "nvme_io": false 00:27:59.128 }, 00:27:59.128 "memory_domains": [ 00:27:59.128 { 00:27:59.128 "dma_device_id": "system", 00:27:59.128 "dma_device_type": 1 00:27:59.128 }, 00:27:59.128 { 00:27:59.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.128 "dma_device_type": 2 00:27:59.128 } 00:27:59.128 ], 00:27:59.128 "driver_specific": { 00:27:59.128 "passthru": { 00:27:59.128 "name": "pt2", 00:27:59.128 "base_bdev_name": "malloc2" 00:27:59.128 } 00:27:59.128 } 00:27:59.128 }' 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .block_size 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 4128 == 4128 ]] 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_size 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ 32 == 32 ]] 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .md_interleave 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ true == true ]] 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # jq .dif_type 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@209 -- # [[ 0 == 0 ]] 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:59.128 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:59.385 [2024-05-16 07:40:52.755975] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:59.385 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 9b282e0d-1357-11ef-8e8f-9dd684e56d79 '!=' 9b282e0d-1357-11ef-8e8f-9dd684e56d79 ']' 00:27:59.385 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:59.385 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # case $1 in 00:27:59.385 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@215 -- # return 0 00:27:59.385 07:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:59.642 [2024-05-16 07:40:52.991941] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:59.642 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:59.643 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:59.643 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.643 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.901 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:59.901 "name": "raid_bdev1", 00:27:59.901 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:27:59.901 "strip_size_kb": 0, 00:27:59.901 "state": "online", 00:27:59.901 "raid_level": "raid1", 00:27:59.901 "superblock": true, 00:27:59.901 "num_base_bdevs": 2, 00:27:59.901 "num_base_bdevs_discovered": 1, 00:27:59.901 "num_base_bdevs_operational": 1, 00:27:59.901 "base_bdevs_list": [ 00:27:59.901 { 00:27:59.901 "name": null, 00:27:59.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.901 "is_configured": false, 00:27:59.901 "data_offset": 256, 00:27:59.901 "data_size": 7936 00:27:59.901 }, 00:27:59.901 { 00:27:59.901 "name": "pt2", 00:27:59.901 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:27:59.901 "is_configured": true, 00:27:59.901 "data_offset": 256, 00:27:59.901 "data_size": 7936 00:27:59.901 } 00:27:59.901 ] 00:27:59.901 }' 00:27:59.901 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:59.901 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.158 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:00.417 [2024-05-16 07:40:53.931917] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:00.417 [2024-05-16 07:40:53.931943] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:00.417 [2024-05-16 07:40:53.931959] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:00.417 [2024-05-16 07:40:53.931970] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:00.417 [2024-05-16 07:40:53.931974] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f2180 name raid_bdev1, state offline 00:28:00.417 07:40:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.417 07:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:00.675 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:00.675 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:00.675 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:00.675 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:00.675 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:28:00.932 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:01.190 [2024-05-16 07:40:54.691934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:01.190 [2024-05-16 07:40:54.691994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.190 [2024-05-16 07:40:54.692021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1f00 00:28:01.190 [2024-05-16 07:40:54.692029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.190 [2024-05-16 07:40:54.692495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.190 [2024-05-16 07:40:54.692522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:01.190 [2024-05-16 07:40:54.692540] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:01.190 [2024-05-16 07:40:54.692550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:01.190 [2024-05-16 07:40:54.692567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f2180 00:28:01.190 [2024-05-16 07:40:54.692570] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:01.191 [2024-05-16 07:40:54.692588] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654e20 00:28:01.191 [2024-05-16 07:40:54.692600] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f2180 00:28:01.191 [2024-05-16 07:40:54.692603] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b5f2180 00:28:01.191 [2024-05-16 07:40:54.692613] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.191 pt2 00:28:01.191 07:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.191 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.449 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:01.449 "name": "raid_bdev1", 00:28:01.449 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:28:01.449 "strip_size_kb": 0, 00:28:01.449 "state": "online", 00:28:01.449 "raid_level": "raid1", 00:28:01.449 "superblock": true, 00:28:01.449 "num_base_bdevs": 2, 00:28:01.449 "num_base_bdevs_discovered": 1, 00:28:01.449 "num_base_bdevs_operational": 1, 00:28:01.449 "base_bdevs_list": [ 00:28:01.449 { 00:28:01.449 "name": null, 00:28:01.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.449 "is_configured": false, 00:28:01.449 "data_offset": 256, 00:28:01.449 "data_size": 7936 00:28:01.449 }, 00:28:01.449 { 00:28:01.449 "name": "pt2", 00:28:01.449 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:28:01.449 "is_configured": true, 00:28:01.449 "data_offset": 256, 00:28:01.449 "data_size": 7936 00:28:01.449 } 00:28:01.449 ] 00:28:01.449 }' 00:28:01.449 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:01.449 07:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:02.030 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:02.030 [2024-05-16 07:40:55.515914] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:02.030 [2024-05-16 07:40:55.515942] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:02.030 [2024-05-16 07:40:55.515962] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.030 [2024-05-16 07:40:55.515975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:02.030 [2024-05-16 07:40:55.515979] bdev_raid.c: 
367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f2180 name raid_bdev1, state offline 00:28:02.030 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:02.030 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.288 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:02.288 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:02.288 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:28:02.288 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:02.546 [2024-05-16 07:40:55.979936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:02.546 [2024-05-16 07:40:55.980001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.546 [2024-05-16 07:40:55.980028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5f1c80 00:28:02.546 [2024-05-16 07:40:55.980036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.546 [2024-05-16 07:40:55.980514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.547 [2024-05-16 07:40:55.980541] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:02.547 [2024-05-16 07:40:55.980560] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:02.547 [2024-05-16 07:40:55.980571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:02.547 [2024-05-16 07:40:55.980590] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:02.547 [2024-05-16 07:40:55.980594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:02.547 [2024-05-16 07:40:55.980599] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f1780 name raid_bdev1, state configuring 00:28:02.547 [2024-05-16 07:40:55.980609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:02.547 [2024-05-16 07:40:55.980623] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5f1780 00:28:02.547 [2024-05-16 07:40:55.980626] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:02.547 [2024-05-16 07:40:55.980645] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b654e20 00:28:02.547 [2024-05-16 07:40:55.980655] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5f1780 00:28:02.547 [2024-05-16 07:40:55.980659] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b5f1780 00:28:02.547 [2024-05-16 07:40:55.980668] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:02.547 pt1 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:02.547 07:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:02.547 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.547 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.805 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.805 "name": "raid_bdev1", 00:28:02.805 "uuid": "9b282e0d-1357-11ef-8e8f-9dd684e56d79", 00:28:02.805 "strip_size_kb": 0, 00:28:02.805 "state": "online", 00:28:02.805 "raid_level": "raid1", 00:28:02.805 "superblock": true, 00:28:02.805 "num_base_bdevs": 2, 00:28:02.805 "num_base_bdevs_discovered": 1, 00:28:02.805 "num_base_bdevs_operational": 1, 00:28:02.805 "base_bdevs_list": [ 00:28:02.805 { 00:28:02.805 "name": null, 00:28:02.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.805 "is_configured": false, 00:28:02.805 "data_offset": 256, 00:28:02.805 "data_size": 7936 00:28:02.805 }, 00:28:02.805 { 00:28:02.805 "name": "pt2", 00:28:02.805 "uuid": "8432db01-3868-715d-b44e-985caadac3f3", 00:28:02.805 "is_configured": true, 00:28:02.805 "data_offset": 256, 00:28:02.805 "data_size": 7936 00:28:02.805 } 00:28:02.805 ] 00:28:02.805 }' 00:28:02.805 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.805 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.064 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:03.064 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:03.630 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:03.630 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:03.630 07:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:03.630 [2024-05-16 07:40:57.091964] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 9b282e0d-1357-11ef-8e8f-9dd684e56d79 '!=' 9b282e0d-1357-11ef-8e8f-9dd684e56d79 ']' 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 65553 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65553 ']' 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65553 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65553 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdev_svc 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdev_svc = sudo ']' 00:28:03.630 killing process with pid 65553 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65553' 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 65553 00:28:03.630 [2024-05-16 07:40:57.126023] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:03.630 [2024-05-16 07:40:57.126048] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:03.630 [2024-05-16 07:40:57.126070] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:03.630 [2024-05-16 07:40:57.126085] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5f1780 name raid_bdev1, state offline 00:28:03.630 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 65553 00:28:03.630 [2024-05-16 07:40:57.135961] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:03.888 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:28:03.888 00:28:03.888 real 0m14.000s 00:28:03.888 user 0m24.604s 00:28:03.888 sys 0m2.561s 00:28:03.888 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.888 07:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.888 ************************************ 00:28:03.888 END TEST raid_superblock_test_md_interleaved 00:28:03.888 ************************************ 00:28:03.888 07:40:57 bdev_raid -- bdev/bdev_raid.sh@848 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:03.888 07:40:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:03.889 07:40:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.889 07:40:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:03.889 ************************************ 00:28:03.889 START TEST raid_rebuild_test_sb_md_interleaved 00:28:03.889 ************************************ 00:28:03.889 07:40:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=65948 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 65948 /var/tmp/spdk-raid.sock 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@827 -- # '[' -z 65948 ']' 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.889 07:40:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.889 [2024-05-16 07:40:57.383003] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:03.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:03.889 Zero copy mechanism will not be used. 00:28:03.889 [2024-05-16 07:40:57.383255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:04.455 EAL: TSC is not safe to use in SMP mode 00:28:04.455 EAL: TSC is not invariant 00:28:04.455 [2024-05-16 07:40:57.886267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.455 [2024-05-16 07:40:57.977317] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:04.455 [2024-05-16 07:40:57.979510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.455 [2024-05-16 07:40:57.980228] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:04.455 [2024-05-16 07:40:57.980240] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:05.020 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.020 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:28:05.020 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:05.020 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:05.278 BaseBdev1_malloc 00:28:05.278 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:05.535 [2024-05-16 07:40:58.959902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:05.535 [2024-05-16 07:40:58.959978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:05.535 [2024-05-16 07:40:58.960557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9ef780 00:28:05.535 [2024-05-16 07:40:58.960586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:05.535 [2024-05-16 07:40:58.961334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:05.535 [2024-05-16 07:40:58.961365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:28:05.535 BaseBdev1 00:28:05.535 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:05.535 07:40:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:05.793 BaseBdev2_malloc 00:28:05.793 07:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:06.050 [2024-05-16 07:40:59.487895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:06.050 [2024-05-16 07:40:59.487965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.050 [2024-05-16 07:40:59.487994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9efc80 00:28:06.050 [2024-05-16 07:40:59.488002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.050 [2024-05-16 07:40:59.488518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.050 [2024-05-16 07:40:59.488550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:06.050 BaseBdev2 00:28:06.050 07:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:06.307 spare_malloc 00:28:06.307 07:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:06.564 spare_delay 00:28:06.564 07:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:06.820 [2024-05-16 07:41:00.187937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:06.820 [2024-05-16 07:41:00.188037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:06.820 [2024-05-16 07:41:00.188083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f0400 00:28:06.820 [2024-05-16 07:41:00.188100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:06.820 [2024-05-16 07:41:00.188763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:06.820 [2024-05-16 07:41:00.188806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:06.820 spare 00:28:06.820 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:07.077 [2024-05-16 07:41:00.423924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.077 [2024-05-16 07:41:00.424396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:07.077 [2024-05-16 07:41:00.424476] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a9f0680 00:28:07.077 [2024-05-16 07:41:00.424481] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:07.077 [2024-05-16 07:41:00.424514] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52e20 00:28:07.077 [2024-05-16 07:41:00.424527] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a9f0680 00:28:07.077 [2024-05-16 07:41:00.424530] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a9f0680 00:28:07.077 [2024-05-16 07:41:00.424542] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.077 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.334 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:07.334 "name": "raid_bdev1", 00:28:07.334 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:07.334 "strip_size_kb": 0, 00:28:07.334 "state": "online", 00:28:07.334 "raid_level": "raid1", 00:28:07.334 "superblock": true, 00:28:07.334 "num_base_bdevs": 2, 00:28:07.334 "num_base_bdevs_discovered": 2, 00:28:07.334 "num_base_bdevs_operational": 2, 00:28:07.334 "base_bdevs_list": [ 00:28:07.334 { 00:28:07.334 "name": "BaseBdev1", 00:28:07.334 "uuid": "bd98abae-e68d-225d-a332-5d699f374e94", 00:28:07.334 "is_configured": true, 00:28:07.334 "data_offset": 256, 00:28:07.334 "data_size": 7936 00:28:07.334 }, 00:28:07.334 { 00:28:07.334 "name": "BaseBdev2", 00:28:07.334 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:07.334 "is_configured": true, 00:28:07.334 "data_offset": 256, 00:28:07.334 "data_size": 7936 00:28:07.334 } 00:28:07.334 ] 00:28:07.334 }' 00:28:07.334 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:07.334 07:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.593 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:07.593 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:07.864 [2024-05-16 07:41:01.243983] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:07.864 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:28:07.864 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.864 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:08.153 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:28:08.153 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:08.153 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:28:08.153 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:08.412 [2024-05-16 07:41:01.751927] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.412 07:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.670 07:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:08.670 "name": "raid_bdev1", 00:28:08.670 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:08.670 "strip_size_kb": 0, 00:28:08.670 "state": "online", 00:28:08.670 "raid_level": "raid1", 00:28:08.670 "superblock": true, 00:28:08.670 "num_base_bdevs": 2, 00:28:08.670 "num_base_bdevs_discovered": 1, 00:28:08.670 "num_base_bdevs_operational": 1, 00:28:08.670 "base_bdevs_list": [ 00:28:08.670 { 00:28:08.670 "name": 
null, 00:28:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.670 "is_configured": false, 00:28:08.670 "data_offset": 256, 00:28:08.670 "data_size": 7936 00:28:08.670 }, 00:28:08.670 { 00:28:08.670 "name": "BaseBdev2", 00:28:08.670 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:08.670 "is_configured": true, 00:28:08.670 "data_offset": 256, 00:28:08.670 "data_size": 7936 00:28:08.670 } 00:28:08.670 ] 00:28:08.670 }' 00:28:08.670 07:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:08.670 07:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.927 07:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:09.185 [2024-05-16 07:41:02.651932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:09.185 [2024-05-16 07:41:02.652075] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52ec0 00:28:09.185 [2024-05-16 07:41:02.652913] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:09.185 07:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:10.560 "name": "raid_bdev1", 00:28:10.560 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:10.560 "strip_size_kb": 0, 00:28:10.560 "state": "online", 00:28:10.560 "raid_level": "raid1", 00:28:10.560 "superblock": true, 00:28:10.560 "num_base_bdevs": 2, 00:28:10.560 "num_base_bdevs_discovered": 2, 00:28:10.560 "num_base_bdevs_operational": 2, 00:28:10.560 "process": { 00:28:10.560 "type": "rebuild", 00:28:10.560 "target": "spare", 00:28:10.560 "progress": { 00:28:10.560 "blocks": 3072, 00:28:10.560 "percent": 38 00:28:10.560 } 00:28:10.560 }, 00:28:10.560 "base_bdevs_list": [ 00:28:10.560 { 00:28:10.560 "name": "spare", 00:28:10.560 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:10.560 "is_configured": true, 00:28:10.560 "data_offset": 256, 00:28:10.560 "data_size": 7936 00:28:10.560 }, 00:28:10.560 { 00:28:10.560 "name": "BaseBdev2", 00:28:10.560 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:10.560 "is_configured": true, 00:28:10.560 "data_offset": 256, 00:28:10.560 "data_size": 7936 00:28:10.560 } 00:28:10.560 ] 00:28:10.560 }' 00:28:10.560 07:41:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:10.560 07:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:10.818 [2024-05-16 07:41:04.188258] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.818 [2024-05-16 07:41:04.260300] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:10.818 [2024-05-16 07:41:04.260369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.818 [2024-05-16 07:41:04.260375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.818 [2024-05-16 07:41:04.260380] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.818 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.075 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:11.075 "name": "raid_bdev1", 00:28:11.075 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:11.075 "strip_size_kb": 0, 00:28:11.075 "state": "online", 00:28:11.075 "raid_level": "raid1", 00:28:11.075 "superblock": true, 00:28:11.075 "num_base_bdevs": 2, 00:28:11.075 "num_base_bdevs_discovered": 1, 00:28:11.075 "num_base_bdevs_operational": 1, 00:28:11.075 "base_bdevs_list": [ 00:28:11.075 { 00:28:11.075 "name": null, 00:28:11.075 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:11.075 "is_configured": false, 00:28:11.075 "data_offset": 256, 00:28:11.075 "data_size": 7936 00:28:11.075 }, 00:28:11.075 { 00:28:11.075 "name": "BaseBdev2", 00:28:11.075 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:11.075 "is_configured": true, 00:28:11.075 "data_offset": 256, 00:28:11.075 "data_size": 7936 00:28:11.075 } 00:28:11.075 ] 00:28:11.075 }' 00:28:11.075 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:11.075 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.640 07:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:11.896 "name": "raid_bdev1", 00:28:11.896 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:11.896 "strip_size_kb": 0, 00:28:11.896 "state": "online", 00:28:11.896 "raid_level": "raid1", 00:28:11.896 "superblock": true, 00:28:11.896 "num_base_bdevs": 2, 00:28:11.896 "num_base_bdevs_discovered": 1, 00:28:11.896 "num_base_bdevs_operational": 1, 00:28:11.896 "base_bdevs_list": [ 00:28:11.896 { 00:28:11.896 "name": null, 00:28:11.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.896 "is_configured": false, 00:28:11.896 "data_offset": 256, 00:28:11.896 "data_size": 7936 00:28:11.896 }, 00:28:11.896 { 00:28:11.896 "name": "BaseBdev2", 00:28:11.896 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:11.896 "is_configured": true, 00:28:11.896 "data_offset": 256, 00:28:11.896 "data_size": 7936 00:28:11.896 } 00:28:11.896 ] 00:28:11.896 }' 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:11.896 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:12.153 [2024-05-16 07:41:05.556266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:12.153 [2024-05-16 07:41:05.556402] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52e20 
00:28:12.153 [2024-05-16 07:41:05.557098] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:12.153 07:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:13.522 "name": "raid_bdev1", 00:28:13.522 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:13.522 "strip_size_kb": 0, 00:28:13.522 "state": "online", 00:28:13.522 "raid_level": "raid1", 00:28:13.522 "superblock": true, 00:28:13.522 "num_base_bdevs": 2, 00:28:13.522 "num_base_bdevs_discovered": 2, 00:28:13.522 "num_base_bdevs_operational": 2, 00:28:13.522 "process": { 00:28:13.522 "type": "rebuild", 00:28:13.522 "target": "spare", 00:28:13.522 "progress": { 00:28:13.522 "blocks": 3072, 00:28:13.522 "percent": 38 00:28:13.522 } 00:28:13.522 }, 00:28:13.522 "base_bdevs_list": [ 00:28:13.522 { 00:28:13.522 "name": "spare", 00:28:13.522 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:13.522 "is_configured": true, 00:28:13.522 "data_offset": 256, 00:28:13.522 "data_size": 7936 00:28:13.522 }, 00:28:13.522 { 00:28:13.522 "name": "BaseBdev2", 00:28:13.522 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:13.522 "is_configured": true, 00:28:13.522 "data_offset": 256, 00:28:13.522 "data_size": 7936 00:28:13.522 } 00:28:13.522 ] 00:28:13.522 }' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:13.522 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:13.522 07:41:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=585 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.522 07:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:13.780 "name": "raid_bdev1", 00:28:13.780 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:13.780 "strip_size_kb": 0, 00:28:13.780 "state": "online", 00:28:13.780 "raid_level": "raid1", 00:28:13.780 "superblock": true, 00:28:13.780 "num_base_bdevs": 2, 00:28:13.780 "num_base_bdevs_discovered": 2, 00:28:13.780 "num_base_bdevs_operational": 2, 00:28:13.780 "process": { 00:28:13.780 "type": "rebuild", 00:28:13.780 "target": "spare", 00:28:13.780 "progress": { 00:28:13.780 "blocks": 4096, 00:28:13.780 "percent": 51 00:28:13.780 } 00:28:13.780 }, 00:28:13.780 "base_bdevs_list": [ 00:28:13.780 { 00:28:13.780 "name": "spare", 00:28:13.780 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:13.780 "is_configured": true, 00:28:13.780 "data_offset": 256, 00:28:13.780 "data_size": 7936 00:28:13.780 }, 00:28:13.780 { 00:28:13.780 "name": "BaseBdev2", 00:28:13.780 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:13.780 "is_configured": true, 00:28:13.780 "data_offset": 256, 00:28:13.780 "data_size": 7936 00:28:13.780 } 00:28:13.780 ] 00:28:13.780 }' 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:13.780 07:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:15.154 07:41:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:15.154 "name": "raid_bdev1", 00:28:15.154 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:15.154 "strip_size_kb": 0, 00:28:15.154 "state": "online", 00:28:15.154 "raid_level": "raid1", 00:28:15.154 "superblock": true, 00:28:15.154 "num_base_bdevs": 2, 00:28:15.154 "num_base_bdevs_discovered": 2, 00:28:15.154 "num_base_bdevs_operational": 2, 00:28:15.154 "process": { 00:28:15.154 "type": "rebuild", 00:28:15.154 "target": "spare", 00:28:15.154 "progress": { 00:28:15.154 "blocks": 7424, 00:28:15.154 "percent": 93 00:28:15.154 } 00:28:15.154 }, 00:28:15.154 "base_bdevs_list": [ 00:28:15.154 { 00:28:15.154 "name": "spare", 00:28:15.154 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:15.154 "is_configured": true, 00:28:15.154 "data_offset": 256, 00:28:15.154 "data_size": 7936 00:28:15.154 }, 00:28:15.154 { 00:28:15.154 "name": "BaseBdev2", 00:28:15.154 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:15.154 "is_configured": true, 00:28:15.154 "data_offset": 256, 00:28:15.154 "data_size": 7936 00:28:15.154 } 00:28:15.154 ] 00:28:15.154 }' 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.154 07:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:15.154 [2024-05-16 07:41:08.670459] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:15.154 [2024-05-16 07:41:08.670501] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:15.154 [2024-05-16 07:41:08.670561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:16.088 07:41:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.088 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:16.655 "name": "raid_bdev1", 00:28:16.655 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:16.655 "strip_size_kb": 0, 00:28:16.655 "state": "online", 00:28:16.655 "raid_level": "raid1", 00:28:16.655 "superblock": true, 00:28:16.655 "num_base_bdevs": 2, 00:28:16.655 "num_base_bdevs_discovered": 2, 00:28:16.655 "num_base_bdevs_operational": 2, 00:28:16.655 "base_bdevs_list": [ 00:28:16.655 { 00:28:16.655 "name": "spare", 00:28:16.655 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:16.655 "is_configured": true, 00:28:16.655 "data_offset": 256, 00:28:16.655 "data_size": 7936 00:28:16.655 }, 00:28:16.655 { 00:28:16.655 "name": "BaseBdev2", 00:28:16.655 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:16.655 "is_configured": true, 00:28:16.655 "data_offset": 256, 00:28:16.655 "data_size": 7936 00:28:16.655 } 00:28:16.655 ] 00:28:16.655 }' 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.655 07:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:16.913 "name": "raid_bdev1", 00:28:16.913 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:16.913 "strip_size_kb": 0, 00:28:16.913 "state": "online", 00:28:16.913 "raid_level": "raid1", 00:28:16.913 "superblock": true, 00:28:16.913 "num_base_bdevs": 2, 00:28:16.913 "num_base_bdevs_discovered": 2, 00:28:16.913 "num_base_bdevs_operational": 2, 00:28:16.913 "base_bdevs_list": [ 00:28:16.913 { 00:28:16.913 "name": "spare", 
00:28:16.913 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:16.913 "is_configured": true, 00:28:16.913 "data_offset": 256, 00:28:16.913 "data_size": 7936 00:28:16.913 }, 00:28:16.913 { 00:28:16.913 "name": "BaseBdev2", 00:28:16.913 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:16.913 "is_configured": true, 00:28:16.913 "data_offset": 256, 00:28:16.913 "data_size": 7936 00:28:16.913 } 00:28:16.913 ] 00:28:16.913 }' 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:16.913 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.914 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.172 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:17.172 "name": "raid_bdev1", 00:28:17.172 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:17.172 "strip_size_kb": 0, 00:28:17.172 "state": "online", 00:28:17.172 "raid_level": "raid1", 00:28:17.172 "superblock": true, 00:28:17.172 "num_base_bdevs": 2, 00:28:17.172 "num_base_bdevs_discovered": 2, 00:28:17.172 "num_base_bdevs_operational": 2, 00:28:17.172 "base_bdevs_list": [ 00:28:17.172 { 00:28:17.172 "name": "spare", 00:28:17.172 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:17.172 "is_configured": true, 00:28:17.172 "data_offset": 256, 00:28:17.172 "data_size": 7936 00:28:17.172 }, 00:28:17.172 { 00:28:17.172 "name": "BaseBdev2", 00:28:17.172 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:17.172 "is_configured": true, 00:28:17.172 "data_offset": 256, 00:28:17.172 "data_size": 7936 00:28:17.172 } 00:28:17.172 ] 00:28:17.172 }' 00:28:17.172 07:41:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:17.172 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:17.488 07:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:17.746 [2024-05-16 07:41:11.065288] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.746 [2024-05-16 07:41:11.065316] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.746 [2024-05-16 07:41:11.065340] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.746 [2024-05-16 07:41:11.065354] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.746 [2024-05-16 07:41:11.065359] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a9f0680 name raid_bdev1, state offline 00:28:17.746 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:28:17.746 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.005 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:18.005 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:18.005 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:18.005 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:18.263 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:18.521 [2024-05-16 07:41:11.869310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:18.521 [2024-05-16 07:41:11.869397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.522 [2024-05-16 07:41:11.869438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f0400 00:28:18.522 [2024-05-16 07:41:11.869446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.522 [2024-05-16 07:41:11.869972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.522 [2024-05-16 07:41:11.869992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:18.522 [2024-05-16 07:41:11.870012] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:18.522 [2024-05-16 07:41:11.870023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:18.522 [2024-05-16 07:41:11.870046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:18.522 spare 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.522 07:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.522 [2024-05-16 07:41:11.970067] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a9f0680 00:28:18.522 [2024-05-16 07:41:11.970100] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:18.522 [2024-05-16 07:41:11.970156] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52e20 00:28:18.522 [2024-05-16 07:41:11.970193] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a9f0680 00:28:18.522 [2024-05-16 07:41:11.970201] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a9f0680 00:28:18.522 [2024-05-16 07:41:11.970232] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.779 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:18.779 "name": "raid_bdev1", 00:28:18.779 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:18.779 "strip_size_kb": 0, 00:28:18.779 "state": "online", 00:28:18.779 "raid_level": "raid1", 00:28:18.779 "superblock": true, 00:28:18.779 "num_base_bdevs": 2, 00:28:18.779 "num_base_bdevs_discovered": 2, 00:28:18.779 "num_base_bdevs_operational": 2, 00:28:18.779 "base_bdevs_list": [ 00:28:18.779 { 00:28:18.779 "name": "spare", 00:28:18.779 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:18.779 "is_configured": true, 00:28:18.779 "data_offset": 256, 00:28:18.779 "data_size": 7936 00:28:18.779 }, 00:28:18.779 { 00:28:18.779 "name": "BaseBdev2", 00:28:18.779 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:18.779 "is_configured": true, 00:28:18.779 "data_offset": 256, 00:28:18.780 "data_size": 7936 00:28:18.780 } 00:28:18.780 ] 00:28:18.780 }' 00:28:18.780 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:18.780 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.038 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.295 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:19.296 "name": "raid_bdev1", 00:28:19.296 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:19.296 "strip_size_kb": 0, 00:28:19.296 "state": "online", 00:28:19.296 "raid_level": "raid1", 00:28:19.296 "superblock": true, 00:28:19.296 "num_base_bdevs": 2, 00:28:19.296 "num_base_bdevs_discovered": 2, 00:28:19.296 "num_base_bdevs_operational": 2, 00:28:19.296 "base_bdevs_list": [ 00:28:19.296 { 00:28:19.296 "name": "spare", 00:28:19.296 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:19.296 "is_configured": true, 00:28:19.296 "data_offset": 256, 00:28:19.296 "data_size": 7936 00:28:19.296 }, 00:28:19.296 { 00:28:19.296 "name": "BaseBdev2", 00:28:19.296 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:19.296 "is_configured": true, 00:28:19.296 "data_offset": 256, 00:28:19.296 "data_size": 7936 00:28:19.296 } 00:28:19.296 ] 00:28:19.296 }' 00:28:19.296 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:19.553 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:19.553 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:19.553 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:19.553 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:19.553 07:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.811 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.811 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:20.070 [2024-05-16 07:41:13.373310] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:20.070 07:41:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.070 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.336 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:20.336 "name": "raid_bdev1", 00:28:20.336 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:20.336 "strip_size_kb": 0, 00:28:20.336 "state": "online", 00:28:20.336 "raid_level": "raid1", 00:28:20.336 "superblock": true, 00:28:20.336 "num_base_bdevs": 2, 00:28:20.336 "num_base_bdevs_discovered": 1, 00:28:20.336 "num_base_bdevs_operational": 1, 00:28:20.336 "base_bdevs_list": [ 00:28:20.336 { 00:28:20.336 "name": null, 00:28:20.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.336 "is_configured": false, 00:28:20.336 "data_offset": 256, 00:28:20.336 "data_size": 7936 00:28:20.336 }, 00:28:20.336 { 00:28:20.336 "name": "BaseBdev2", 00:28:20.336 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:20.336 "is_configured": true, 00:28:20.336 "data_offset": 256, 00:28:20.336 "data_size": 7936 00:28:20.336 } 00:28:20.336 ] 00:28:20.336 }' 00:28:20.336 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:20.336 07:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.602 07:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:20.861 [2024-05-16 07:41:14.329327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.861 [2024-05-16 07:41:14.329396] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:20.861 [2024-05-16 07:41:14.329400] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:20.861 [2024-05-16 07:41:14.329435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.861 [2024-05-16 07:41:14.329509] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52ec0 00:28:20.861 [2024-05-16 07:41:14.329969] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:20.861 07:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # sleep 1 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:22.235 "name": "raid_bdev1", 00:28:22.235 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:22.235 "strip_size_kb": 0, 00:28:22.235 "state": "online", 00:28:22.235 "raid_level": "raid1", 00:28:22.235 "superblock": true, 00:28:22.235 "num_base_bdevs": 2, 00:28:22.235 "num_base_bdevs_discovered": 2, 00:28:22.235 "num_base_bdevs_operational": 2, 00:28:22.235 "process": { 00:28:22.235 "type": "rebuild", 00:28:22.235 "target": "spare", 00:28:22.235 "progress": { 00:28:22.235 "blocks": 3072, 00:28:22.235 "percent": 38 00:28:22.235 } 00:28:22.235 }, 00:28:22.235 "base_bdevs_list": [ 00:28:22.235 { 00:28:22.235 "name": "spare", 00:28:22.235 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:22.235 "is_configured": true, 00:28:22.235 "data_offset": 256, 00:28:22.235 "data_size": 7936 00:28:22.235 }, 00:28:22.235 { 00:28:22.235 "name": "BaseBdev2", 00:28:22.235 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:22.235 "is_configured": true, 00:28:22.235 "data_offset": 256, 00:28:22.235 "data_size": 7936 00:28:22.235 } 00:28:22.235 ] 00:28:22.235 }' 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:22.235 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.236 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:22.494 [2024-05-16 07:41:15.813444] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:22.494 [2024-05-16 07:41:15.836261] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:22.494 [2024-05-16 07:41:15.836306] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.494 [2024-05-16 07:41:15.836311] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:22.494 [2024-05-16 07:41:15.836314] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.494 07:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.754 07:41:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:22.754 "name": "raid_bdev1", 00:28:22.754 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:22.754 "strip_size_kb": 0, 00:28:22.754 "state": "online", 00:28:22.754 "raid_level": "raid1", 00:28:22.754 "superblock": true, 00:28:22.754 "num_base_bdevs": 2, 00:28:22.754 "num_base_bdevs_discovered": 1, 00:28:22.754 "num_base_bdevs_operational": 1, 00:28:22.754 "base_bdevs_list": [ 00:28:22.754 { 00:28:22.754 "name": null, 00:28:22.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.754 "is_configured": false, 00:28:22.754 "data_offset": 256, 00:28:22.754 "data_size": 7936 00:28:22.754 }, 00:28:22.754 { 00:28:22.754 "name": "BaseBdev2", 00:28:22.754 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:22.754 "is_configured": true, 00:28:22.754 "data_offset": 256, 00:28:22.754 "data_size": 7936 00:28:22.754 } 00:28:22.754 ] 00:28:22.754 }' 00:28:22.754 07:41:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:22.754 07:41:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 07:41:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:23.270 
[2024-05-16 07:41:16.586710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:23.270 [2024-05-16 07:41:16.586766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.270 [2024-05-16 07:41:16.586791] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f0400 00:28:23.270 [2024-05-16 07:41:16.586800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.270 [2024-05-16 07:41:16.586873] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.270 [2024-05-16 07:41:16.586881] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:23.270 [2024-05-16 07:41:16.586913] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:23.270 [2024-05-16 07:41:16.586918] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:23.270 [2024-05-16 07:41:16.586921] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:23.270 [2024-05-16 07:41:16.586931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:23.270 [2024-05-16 07:41:16.586998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa52e20 00:28:23.270 [2024-05-16 07:41:16.587443] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:23.270 spare 00:28:23.270 07:41:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # sleep 1 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.348 07:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:24.606 "name": "raid_bdev1", 00:28:24.606 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:24.606 "strip_size_kb": 0, 00:28:24.606 "state": "online", 00:28:24.606 "raid_level": "raid1", 00:28:24.606 "superblock": true, 00:28:24.606 "num_base_bdevs": 2, 00:28:24.606 "num_base_bdevs_discovered": 2, 00:28:24.606 "num_base_bdevs_operational": 2, 00:28:24.606 "process": { 00:28:24.606 "type": "rebuild", 00:28:24.606 "target": "spare", 00:28:24.606 "progress": { 00:28:24.606 "blocks": 3584, 00:28:24.606 "percent": 45 00:28:24.606 } 00:28:24.606 }, 00:28:24.606 "base_bdevs_list": [ 00:28:24.606 { 00:28:24.606 "name": "spare", 00:28:24.606 "uuid": "8121f815-28a2-7f57-a055-5129577b296d", 00:28:24.606 "is_configured": true, 00:28:24.606 "data_offset": 256, 00:28:24.606 
"data_size": 7936 00:28:24.606 }, 00:28:24.606 { 00:28:24.606 "name": "BaseBdev2", 00:28:24.606 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:24.606 "is_configured": true, 00:28:24.606 "data_offset": 256, 00:28:24.606 "data_size": 7936 00:28:24.606 } 00:28:24.606 ] 00:28:24.606 }' 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:24.606 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:25.171 [2024-05-16 07:41:18.503224] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:25.171 [2024-05-16 07:41:18.596083] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:28:25.171 [2024-05-16 07:41:18.596172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.171 [2024-05-16 07:41:18.596179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:25.171 [2024-05-16 07:41:18.596184] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.171 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.430 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:25.430 "name": "raid_bdev1", 00:28:25.430 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:25.430 "strip_size_kb": 0, 00:28:25.430 "state": "online", 00:28:25.430 
"raid_level": "raid1", 00:28:25.430 "superblock": true, 00:28:25.430 "num_base_bdevs": 2, 00:28:25.430 "num_base_bdevs_discovered": 1, 00:28:25.430 "num_base_bdevs_operational": 1, 00:28:25.430 "base_bdevs_list": [ 00:28:25.430 { 00:28:25.430 "name": null, 00:28:25.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.430 "is_configured": false, 00:28:25.430 "data_offset": 256, 00:28:25.430 "data_size": 7936 00:28:25.430 }, 00:28:25.430 { 00:28:25.430 "name": "BaseBdev2", 00:28:25.430 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:25.430 "is_configured": true, 00:28:25.430 "data_offset": 256, 00:28:25.430 "data_size": 7936 00:28:25.430 } 00:28:25.430 ] 00:28:25.430 }' 00:28:25.430 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:25.430 07:41:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.687 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:26.252 "name": "raid_bdev1", 00:28:26.252 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:26.252 "strip_size_kb": 0, 00:28:26.252 "state": "online", 00:28:26.252 "raid_level": "raid1", 00:28:26.252 "superblock": true, 00:28:26.252 "num_base_bdevs": 2, 00:28:26.252 "num_base_bdevs_discovered": 1, 00:28:26.252 "num_base_bdevs_operational": 1, 00:28:26.252 "base_bdevs_list": [ 00:28:26.252 { 00:28:26.252 "name": null, 00:28:26.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.252 "is_configured": false, 00:28:26.252 "data_offset": 256, 00:28:26.252 "data_size": 7936 00:28:26.252 }, 00:28:26.252 { 00:28:26.252 "name": "BaseBdev2", 00:28:26.252 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:26.252 "is_configured": true, 00:28:26.252 "data_offset": 256, 00:28:26.252 "data_size": 7936 00:28:26.252 } 00:28:26.252 ] 00:28:26.252 }' 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:28:26.252 07:41:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:26.509 [2024-05-16 07:41:20.019235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:26.509 [2024-05-16 07:41:20.019293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.509 [2024-05-16 07:41:20.019322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9ef780 00:28:26.509 [2024-05-16 07:41:20.019330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.509 [2024-05-16 07:41:20.019388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.509 [2024-05-16 07:41:20.019396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:26.509 [2024-05-16 07:41:20.019414] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:26.509 [2024-05-16 07:41:20.019419] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:26.509 [2024-05-16 07:41:20.019423] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:26.509 BaseBdev1 00:28:26.509 07:41:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # sleep 1 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:27.880 "name": "raid_bdev1", 00:28:27.880 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:27.880 "strip_size_kb": 0, 00:28:27.880 "state": "online", 00:28:27.880 "raid_level": "raid1", 00:28:27.880 "superblock": true, 00:28:27.880 "num_base_bdevs": 2, 
00:28:27.880 "num_base_bdevs_discovered": 1, 00:28:27.880 "num_base_bdevs_operational": 1, 00:28:27.880 "base_bdevs_list": [ 00:28:27.880 { 00:28:27.880 "name": null, 00:28:27.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.880 "is_configured": false, 00:28:27.880 "data_offset": 256, 00:28:27.880 "data_size": 7936 00:28:27.880 }, 00:28:27.880 { 00:28:27.880 "name": "BaseBdev2", 00:28:27.880 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:27.880 "is_configured": true, 00:28:27.880 "data_offset": 256, 00:28:27.880 "data_size": 7936 00:28:27.880 } 00:28:27.880 ] 00:28:27.880 }' 00:28:27.880 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:27.881 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.138 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:28.701 "name": "raid_bdev1", 00:28:28.701 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:28.701 "strip_size_kb": 0, 00:28:28.701 "state": "online", 00:28:28.701 "raid_level": "raid1", 00:28:28.701 "superblock": true, 00:28:28.701 "num_base_bdevs": 2, 00:28:28.701 "num_base_bdevs_discovered": 1, 00:28:28.701 "num_base_bdevs_operational": 1, 00:28:28.701 "base_bdevs_list": [ 00:28:28.701 { 00:28:28.701 "name": null, 00:28:28.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.701 "is_configured": false, 00:28:28.701 "data_offset": 256, 00:28:28.701 "data_size": 7936 00:28:28.701 }, 00:28:28.701 { 00:28:28.701 "name": "BaseBdev2", 00:28:28.701 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:28.701 "is_configured": true, 00:28:28.701 "data_offset": 256, 00:28:28.701 "data_size": 7936 00:28:28.701 } 00:28:28.701 ] 00:28:28.701 }' 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.701 07:41:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:28.701 07:41:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.701 [2024-05-16 07:41:22.171252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:28.701 [2024-05-16 07:41:22.171313] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:28.701 [2024-05-16 07:41:22.171317] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:28.701 request: 00:28:28.701 { 00:28:28.701 "raid_bdev": "raid_bdev1", 00:28:28.701 "base_bdev": "BaseBdev1", 00:28:28.701 "method": "bdev_raid_add_base_bdev", 00:28:28.701 "req_id": 1 00:28:28.701 } 00:28:28.701 Got JSON-RPC error response 00:28:28.701 response: 00:28:28.701 { 00:28:28.701 "code": -22, 00:28:28.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:28.701 } 00:28:28.701 07:41:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:28:28.701 07:41:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:28.701 07:41:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:28.701 07:41:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:28.701 07:41:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:30.072 "name": "raid_bdev1", 00:28:30.072 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:30.072 "strip_size_kb": 0, 00:28:30.072 "state": "online", 00:28:30.072 "raid_level": "raid1", 00:28:30.072 "superblock": true, 00:28:30.072 "num_base_bdevs": 2, 00:28:30.072 "num_base_bdevs_discovered": 1, 00:28:30.072 "num_base_bdevs_operational": 1, 00:28:30.072 "base_bdevs_list": [ 00:28:30.072 { 00:28:30.072 "name": null, 00:28:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.072 "is_configured": false, 00:28:30.072 "data_offset": 256, 00:28:30.072 "data_size": 7936 00:28:30.072 }, 00:28:30.072 { 00:28:30.072 "name": "BaseBdev2", 00:28:30.072 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:30.072 "is_configured": true, 00:28:30.072 "data_offset": 256, 00:28:30.072 "data_size": 7936 00:28:30.072 } 00:28:30.072 ] 00:28:30.072 }' 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:30.072 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:30.635 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:30.635 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:30.635 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:30.636 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:30.636 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:30.636 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.636 07:41:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.892 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:30.893 
"name": "raid_bdev1", 00:28:30.893 "uuid": "a3e03d00-1357-11ef-8e8f-9dd684e56d79", 00:28:30.893 "strip_size_kb": 0, 00:28:30.893 "state": "online", 00:28:30.893 "raid_level": "raid1", 00:28:30.893 "superblock": true, 00:28:30.893 "num_base_bdevs": 2, 00:28:30.893 "num_base_bdevs_discovered": 1, 00:28:30.893 "num_base_bdevs_operational": 1, 00:28:30.893 "base_bdevs_list": [ 00:28:30.893 { 00:28:30.893 "name": null, 00:28:30.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.893 "is_configured": false, 00:28:30.893 "data_offset": 256, 00:28:30.893 "data_size": 7936 00:28:30.893 }, 00:28:30.893 { 00:28:30.893 "name": "BaseBdev2", 00:28:30.893 "uuid": "7b62c502-1a87-2a54-86ea-0f5e64591c17", 00:28:30.893 "is_configured": true, 00:28:30.893 "data_offset": 256, 00:28:30.893 "data_size": 7936 00:28:30.893 } 00:28:30.893 ] 00:28:30.893 }' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # killprocess 65948 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 65948 ']' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 65948 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps -c -o command 65948 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # tail -1 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=bdevperf 00:28:30.893 killing process with pid 65948 00:28:30.893 Received shutdown signal, test time was about 60.000000 seconds 00:28:30.893 00:28:30.893 Latency(us) 00:28:30.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.893 =================================================================================================================== 00:28:30.893 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' bdevperf = sudo ']' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65948' 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 65948 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 65948 00:28:30.893 [2024-05-16 07:41:24.240184] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:30.893 [2024-05-16 07:41:24.240218] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.893 [2024-05-16 07:41:24.240232] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.893 [2024-05-16 07:41:24.240237] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a9f0680 name raid_bdev1, state offline 00:28:30.893 [2024-05-16 07:41:24.254944] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@785 -- # return 0 00:28:30.893 ************************************ 00:28:30.893 END TEST raid_rebuild_test_sb_md_interleaved 00:28:30.893 ************************************ 00:28:30.893 00:28:30.893 real 0m27.066s 00:28:30.893 user 0m42.140s 00:28:30.893 sys 0m2.585s 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:30.893 07:41:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:31.150 07:41:24 bdev_raid -- bdev/bdev_raid.sh@850 -- # rm -f /raidrandtest 00:28:31.150 00:28:31.150 real 9m33.058s 00:28:31.150 user 17m4.552s 00:28:31.150 sys 1m24.610s 00:28:31.150 07:41:24 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:31.150 07:41:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:31.150 ************************************ 00:28:31.150 END TEST bdev_raid 00:28:31.150 ************************************ 00:28:31.150 07:41:24 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:31.150 07:41:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:31.150 07:41:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:31.150 07:41:24 -- common/autotest_common.sh@10 -- # set +x 00:28:31.150 ************************************ 00:28:31.150 START TEST bdevperf_config 00:28:31.150 ************************************ 00:28:31.150 07:41:24 bdevperf_config -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:28:31.150 * Looking for test storage... 
00:28:31.150 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:31.150 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:31.150 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:31.150 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:31.150 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:31.150 07:41:24 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:31.150 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:31.150 07:41:24 bdevperf_config -- bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-16 07:41:24.703760] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:34.428 [2024-05-16 07:41:24.703941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:34.428 Using job config with 4 jobs 00:28:34.428 EAL: TSC is not safe to use in SMP mode 00:28:34.428 EAL: TSC is not invariant 00:28:34.428 [2024-05-16 07:41:25.205243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.428 [2024-05-16 07:41:25.296292] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:34.428 [2024-05-16 07:41:25.298671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.428 cpumask for '\''job0'\'' is too big 00:28:34.428 cpumask for '\''job1'\'' is too big 00:28:34.428 cpumask for '\''job2'\'' is too big 00:28:34.428 cpumask for '\''job3'\'' is too big 00:28:34.428 Running I/O for 2 seconds... 00:28:34.428 00:28:34.428 Latency(us) 00:28:34.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350745.56 342.52 0.00 0.00 729.60 190.17 1654.00 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350730.66 342.51 0.00 0.00 729.46 200.90 1568.18 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350711.21 342.49 0.00 0.00 729.36 177.49 1677.40 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350784.08 342.56 0.00 0.00 729.07 69.73 1685.21 00:28:34.428 =================================================================================================================== 00:28:34.428 Total : 1402971.52 1370.09 0.00 0.00 729.37 69.73 1685.21' 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-16 07:41:24.703760] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:28:34.428 [2024-05-16 07:41:24.703941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:34.428 Using job config with 4 jobs 00:28:34.428 EAL: TSC is not safe to use in SMP mode 00:28:34.428 EAL: TSC is not invariant 00:28:34.428 [2024-05-16 07:41:25.205243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.428 [2024-05-16 07:41:25.296292] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:34.428 [2024-05-16 07:41:25.298671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.428 cpumask for '\''job0'\'' is too big 00:28:34.428 cpumask for '\''job1'\'' is too big 00:28:34.428 cpumask for '\''job2'\'' is too big 00:28:34.428 cpumask for '\''job3'\'' is too big 00:28:34.428 Running I/O for 2 seconds... 00:28:34.428 00:28:34.428 Latency(us) 00:28:34.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350745.56 342.52 0.00 0.00 729.60 190.17 1654.00 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350730.66 342.51 0.00 0.00 729.46 200.90 1568.18 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350711.21 342.49 0.00 0.00 729.36 177.49 1677.40 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350784.08 342.56 0.00 0.00 729.07 69.73 1685.21 00:28:34.428 =================================================================================================================== 00:28:34.428 Total : 1402971.52 1370.09 0.00 0.00 729.37 69.73 1685.21' 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-16 07:41:24.703760] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:34.428 [2024-05-16 07:41:24.703941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:34.428 Using job config with 4 jobs 00:28:34.428 EAL: TSC is not safe to use in SMP mode 00:28:34.428 EAL: TSC is not invariant 00:28:34.428 [2024-05-16 07:41:25.205243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.428 [2024-05-16 07:41:25.296292] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:34.428 [2024-05-16 07:41:25.298671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.428 cpumask for '\''job0'\'' is too big 00:28:34.428 cpumask for '\''job1'\'' is too big 00:28:34.428 cpumask for '\''job2'\'' is too big 00:28:34.428 cpumask for '\''job3'\'' is too big 00:28:34.428 Running I/O for 2 seconds... 
00:28:34.428 00:28:34.428 Latency(us) 00:28:34.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350745.56 342.52 0.00 0.00 729.60 190.17 1654.00 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350730.66 342.51 0.00 0.00 729.46 200.90 1568.18 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350711.21 342.49 0.00 0.00 729.36 177.49 1677.40 00:28:34.428 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:34.428 Malloc0 : 2.00 350784.08 342.56 0.00 0.00 729.07 69.73 1685.21 00:28:34.428 =================================================================================================================== 00:28:34.428 Total : 1402971.52 1370.09 0.00 0.00 729.37 69.73 1685.21' 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:28:34.428 07:41:27 bdevperf_config -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:34.428 [2024-05-16 07:41:27.533876] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:34.428 [2024-05-16 07:41:27.534038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:34.428 EAL: TSC is not safe to use in SMP mode 00:28:34.428 EAL: TSC is not invariant 00:28:34.686 [2024-05-16 07:41:27.979004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.686 [2024-05-16 07:41:28.059931] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:34.686 [2024-05-16 07:41:28.062165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.686 cpumask for 'job0' is too big 00:28:34.686 cpumask for 'job1' is too big 00:28:34.686 cpumask for 'job2' is too big 00:28:34.686 cpumask for 'job3' is too big 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:28:37.209 Running I/O for 2 seconds... 
00:28:37.209 00:28:37.209 Latency(us) 00:28:37.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.209 Malloc0 : 2.00 370507.19 361.82 0.00 0.00 690.68 179.44 1529.17 00:28:37.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.209 Malloc0 : 2.00 370494.42 361.81 0.00 0.00 690.57 168.72 1513.57 00:28:37.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.209 Malloc0 : 2.00 370542.35 361.86 0.00 0.00 690.35 176.52 1521.37 00:28:37.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:37.209 Malloc0 : 2.00 370525.69 361.84 0.00 0.00 690.23 146.29 1536.97 00:28:37.209 =================================================================================================================== 00:28:37.209 Total : 1482069.66 1447.33 0.00 0.00 690.46 146.29 1536.97' 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.209 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:37.209 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:37.209 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:37.209 07:41:30 bdevperf_config -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:39.738 07:41:33 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-16 07:41:30.301236] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:39.738 [2024-05-16 07:41:30.301492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:39.738 Using job config with 3 jobs 00:28:39.738 EAL: TSC is not safe to use in SMP mode 00:28:39.738 EAL: TSC is not invariant 00:28:39.738 [2024-05-16 07:41:30.826540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.738 [2024-05-16 07:41:30.914032] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:39.738 [2024-05-16 07:41:30.916279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.738 cpumask for '\''job0'\'' is too big 00:28:39.738 cpumask for '\''job1'\'' is too big 00:28:39.738 cpumask for '\''job2'\'' is too big 00:28:39.738 Running I/O for 2 seconds... 00:28:39.738 00:28:39.738 Latency(us) 00:28:39.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448710.01 438.19 0.00 0.00 570.25 223.33 1763.23 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448723.36 438.21 0.00 0.00 570.10 192.12 1630.59 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448708.11 438.19 0.00 0.00 569.96 130.68 1357.53 00:28:39.739 =================================================================================================================== 00:28:39.739 Total : 1346141.47 1314.59 0.00 0.00 570.10 130.68 1763.23' 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-16 07:41:30.301236] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:39.739 [2024-05-16 07:41:30.301492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:39.739 Using job config with 3 jobs 00:28:39.739 EAL: TSC is not safe to use in SMP mode 00:28:39.739 EAL: TSC is not invariant 00:28:39.739 [2024-05-16 07:41:30.826540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.739 [2024-05-16 07:41:30.914032] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:39.739 [2024-05-16 07:41:30.916279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.739 cpumask for '\''job0'\'' is too big 00:28:39.739 cpumask for '\''job1'\'' is too big 00:28:39.739 cpumask for '\''job2'\'' is too big 00:28:39.739 Running I/O for 2 seconds... 
00:28:39.739 00:28:39.739 Latency(us) 00:28:39.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448710.01 438.19 0.00 0.00 570.25 223.33 1763.23 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448723.36 438.21 0.00 0.00 570.10 192.12 1630.59 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448708.11 438.19 0.00 0.00 569.96 130.68 1357.53 00:28:39.739 =================================================================================================================== 00:28:39.739 Total : 1346141.47 1314.59 0.00 0.00 570.10 130.68 1763.23' 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-16 07:41:30.301236] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:39.739 [2024-05-16 07:41:30.301492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:39.739 Using job config with 3 jobs 00:28:39.739 EAL: TSC is not safe to use in SMP mode 00:28:39.739 EAL: TSC is not invariant 00:28:39.739 [2024-05-16 07:41:30.826540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.739 [2024-05-16 07:41:30.914032] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:39.739 [2024-05-16 07:41:30.916279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.739 cpumask for '\''job0'\'' is too big 00:28:39.739 cpumask for '\''job1'\'' is too big 00:28:39.739 cpumask for '\''job2'\'' is too big 00:28:39.739 Running I/O for 2 seconds... 
00:28:39.739 00:28:39.739 Latency(us) 00:28:39.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448710.01 438.19 0.00 0.00 570.25 223.33 1763.23 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448723.36 438.21 0.00 0.00 570.10 192.12 1630.59 00:28:39.739 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:39.739 Malloc0 : 2.00 448708.11 438.19 0.00 0.00 569.96 130.68 1357.53 00:28:39.739 =================================================================================================================== 00:28:39.739 Total : 1346141.47 1314.59 0.00 0.00 570.10 130.68 1763.23' 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:39.739 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:39.739 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:39.739 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 
00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:39.739 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:39.739 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:39.739 07:41:33 bdevperf_config -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:43.031 07:41:35 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-16 07:41:33.174905] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:43.031 [2024-05-16 07:41:33.175126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:43.031 Using job config with 4 jobs 00:28:43.031 EAL: TSC is not safe to use in SMP mode 00:28:43.031 EAL: TSC is not invariant 00:28:43.031 [2024-05-16 07:41:33.625613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.031 [2024-05-16 07:41:33.708979] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:43.031 [2024-05-16 07:41:33.711249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.031 cpumask for '\''job0'\'' is too big 00:28:43.031 cpumask for '\''job1'\'' is too big 00:28:43.031 cpumask for '\''job2'\'' is too big 00:28:43.031 cpumask for '\''job3'\'' is too big 00:28:43.031 Running I/O for 2 seconds... 
00:28:43.031 00:28:43.031 Latency(us) 00:28:43.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.031 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc0 : 2.00 173071.60 169.02 0.00 0.00 1478.81 472.01 3229.98 00:28:43.031 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc1 : 2.00 173064.00 169.01 0.00 0.00 1478.69 479.82 3214.38 00:28:43.031 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc0 : 2.00 173055.27 169.00 0.00 0.00 1478.29 485.67 2715.05 00:28:43.031 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc1 : 2.00 173045.86 168.99 0.00 0.00 1478.14 442.76 2699.45 00:28:43.031 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc0 : 2.00 173038.25 168.98 0.00 0.00 1477.71 454.46 2231.34 00:28:43.031 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc1 : 2.00 173107.55 169.05 0.00 0.00 1476.94 444.71 2200.13 00:28:43.031 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc0 : 2.00 173099.34 169.04 0.00 0.00 1476.53 353.04 1786.63 00:28:43.031 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.031 Malloc1 : 2.00 173089.58 169.03 0.00 0.00 1476.42 288.67 1794.43 00:28:43.031 =================================================================================================================== 00:28:43.031 Total : 1384571.45 1352.12 0.00 0.00 1477.69 288.67 3229.98' 00:28:43.031 07:41:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-16 07:41:33.174905] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:43.031 [2024-05-16 07:41:33.175126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:43.031 Using job config with 4 jobs 00:28:43.031 EAL: TSC is not safe to use in SMP mode 00:28:43.031 EAL: TSC is not invariant 00:28:43.031 [2024-05-16 07:41:33.625613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.031 [2024-05-16 07:41:33.708979] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:43.032 [2024-05-16 07:41:33.711249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.032 cpumask for '\''job0'\'' is too big 00:28:43.032 cpumask for '\''job1'\'' is too big 00:28:43.032 cpumask for '\''job2'\'' is too big 00:28:43.032 cpumask for '\''job3'\'' is too big 00:28:43.032 Running I/O for 2 seconds... 
00:28:43.032 00:28:43.032 Latency(us) 00:28:43.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173071.60 169.02 0.00 0.00 1478.81 472.01 3229.98 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173064.00 169.01 0.00 0.00 1478.69 479.82 3214.38 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173055.27 169.00 0.00 0.00 1478.29 485.67 2715.05 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173045.86 168.99 0.00 0.00 1478.14 442.76 2699.45 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173038.25 168.98 0.00 0.00 1477.71 454.46 2231.34 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173107.55 169.05 0.00 0.00 1476.94 444.71 2200.13 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173099.34 169.04 0.00 0.00 1476.53 353.04 1786.63 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173089.58 169.03 0.00 0.00 1476.42 288.67 1794.43 00:28:43.032 =================================================================================================================== 00:28:43.032 Total : 1384571.45 1352.12 0.00 0.00 1477.69 288.67 3229.98' 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-05-16 07:41:33.174905] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:43.032 [2024-05-16 07:41:33.175126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:43.032 Using job config with 4 jobs 00:28:43.032 EAL: TSC is not safe to use in SMP mode 00:28:43.032 EAL: TSC is not invariant 00:28:43.032 [2024-05-16 07:41:33.625613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.032 [2024-05-16 07:41:33.708979] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:43.032 [2024-05-16 07:41:33.711249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.032 cpumask for '\''job0'\'' is too big 00:28:43.032 cpumask for '\''job1'\'' is too big 00:28:43.032 cpumask for '\''job2'\'' is too big 00:28:43.032 cpumask for '\''job3'\'' is too big 00:28:43.032 Running I/O for 2 seconds... 
00:28:43.032 00:28:43.032 Latency(us) 00:28:43.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173071.60 169.02 0.00 0.00 1478.81 472.01 3229.98 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173064.00 169.01 0.00 0.00 1478.69 479.82 3214.38 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173055.27 169.00 0.00 0.00 1478.29 485.67 2715.05 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173045.86 168.99 0.00 0.00 1478.14 442.76 2699.45 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173038.25 168.98 0.00 0.00 1477.71 454.46 2231.34 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173107.55 169.05 0.00 0.00 1476.94 444.71 2200.13 00:28:43.032 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc0 : 2.00 173099.34 169.04 0.00 0.00 1476.53 353.04 1786.63 00:28:43.032 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:43.032 Malloc1 : 2.00 173089.58 169.03 0.00 0.00 1476.42 288.67 1794.43 00:28:43.032 =================================================================================================================== 00:28:43.032 Total : 1384571.45 1352.12 0.00 0.00 1477.69 288.67 3229.98' 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:28:43.032 07:41:35 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:43.032 00:28:43.032 real 0m11.427s 00:28:43.032 user 0m9.137s 00:28:43.032 sys 0m2.279s 00:28:43.032 07:41:35 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:43.032 07:41:35 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:28:43.032 ************************************ 00:28:43.032 END TEST bdevperf_config 00:28:43.032 ************************************ 00:28:43.032 07:41:35 -- spdk/autotest.sh@188 -- # uname -s 00:28:43.032 07:41:35 -- spdk/autotest.sh@188 -- # [[ FreeBSD == Linux ]] 00:28:43.032 07:41:35 -- spdk/autotest.sh@194 -- # uname -s 00:28:43.032 07:41:35 -- spdk/autotest.sh@194 -- # [[ FreeBSD == Linux ]] 00:28:43.032 07:41:35 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:43.032 07:41:35 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:43.032 07:41:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:43.032 07:41:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.032 07:41:35 -- common/autotest_common.sh@10 -- # set +x 00:28:43.032 ************************************ 00:28:43.032 START TEST blockdev_nvme 00:28:43.032 ************************************ 00:28:43.032 07:41:35 blockdev_nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:43.032 * Looking for test 
storage... 00:28:43.032 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:43.032 07:41:36 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66687 00:28:43.032 07:41:36 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:43.033 07:41:36 blockdev_nvme -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:43.033 07:41:36 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66687 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 66687 ']' 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.033 07:41:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.033 [2024-05-16 07:41:36.140967] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:28:43.033 [2024-05-16 07:41:36.141154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:43.291 EAL: TSC is not safe to use in SMP mode 00:28:43.291 EAL: TSC is not invariant 00:28:43.291 [2024-05-16 07:41:36.639541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.291 [2024-05-16 07:41:36.719248] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:43.291 [2024-05-16 07:41:36.721399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 [2024-05-16 07:41:37.234359] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 
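For reference, the NVMe plumbing that setup_nvme_conf loads above through load_subsystem_config can be reproduced by hand against a running spdk_tgt. A minimal sketch, not part of the captured run, assuming the default RPC socket (/var/tmp/spdk.sock) and the same QEMU controller address seen in the generated config:

    RPC=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the controller at 0000:00:10.0 as "Nvme0", matching the gen_nvme.sh output above
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # List unclaimed bdevs the way blockdev.sh@748 does (jq filter copied from the trace)
    $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'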
00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:28:43.857 07:41:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:28:43.857 07:41:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:28:43.858 07:41:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b9d99656-1357-11ef-8e8f-9dd684e56d79"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b9d99656-1357-11ef-8e8f-9dd684e56d79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:43.858 07:41:37 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:28:43.858 07:41:37 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:28:43.858 07:41:37 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:28:43.858 07:41:37 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66687 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 66687 ']' 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 66687 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@954 -- # ps -c -o command 66687 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@954 -- # tail -1 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:28:43.858 killing process with pid 66687 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66687' 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 66687 00:28:43.858 07:41:37 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 66687 00:28:44.116 07:41:37 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:44.116 07:41:37 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world 
/usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:44.116 07:41:37 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:44.116 07:41:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:44.116 07:41:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:44.116 ************************************ 00:28:44.116 START TEST bdev_hello_world 00:28:44.116 ************************************ 00:28:44.116 07:41:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:44.373 [2024-05-16 07:41:37.668192] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:44.373 [2024-05-16 07:41:37.668461] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:44.631 EAL: TSC is not safe to use in SMP mode 00:28:44.631 EAL: TSC is not invariant 00:28:44.631 [2024-05-16 07:41:38.141660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.889 [2024-05-16 07:41:38.225206] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:44.889 [2024-05-16 07:41:38.227531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.889 [2024-05-16 07:41:38.284798] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:28:44.889 [2024-05-16 07:41:38.352955] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:44.889 [2024-05-16 07:41:38.353032] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:44.889 [2024-05-16 07:41:38.353059] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:44.889 [2024-05-16 07:41:38.353991] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:44.889 [2024-05-16 07:41:38.354434] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:44.889 [2024-05-16 07:41:38.354460] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:44.889 [2024-05-16 07:41:38.354892] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
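The bdev_hello_world step above comes down to launching the hello_bdev example against the Nvme0n1 bdev described in bdev.json; it opens the bdev, writes a buffer, reads it back, and logs "Hello World!". A minimal standalone invocation, using the same paths as this run, would be:

    # Run the hello_bdev example against Nvme0n1 (paths as used in this job).
    # run_test appends a trailing '' argument above; it is optional here.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1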
00:28:44.889 00:28:44.889 [2024-05-16 07:41:38.354919] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:45.147 00:28:45.147 real 0m0.880s 00:28:45.147 user 0m0.359s 00:28:45.147 sys 0m0.520s 00:28:45.147 ************************************ 00:28:45.147 END TEST bdev_hello_world 00:28:45.147 ************************************ 00:28:45.147 07:41:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:45.147 07:41:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:45.147 07:41:38 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:28:45.147 07:41:38 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:45.147 07:41:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:45.147 07:41:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:45.147 ************************************ 00:28:45.147 START TEST bdev_bounds 00:28:45.147 ************************************ 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66754 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:45.147 Process bdevio pid: 66754 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66754' 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66754 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 66754 ']' 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:45.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:45.147 07:41:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:45.147 [2024-05-16 07:41:38.589413] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:45.147 [2024-05-16 07:41:38.589576] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:45.713 EAL: TSC is not safe to use in SMP mode 00:28:45.713 EAL: TSC is not invariant 00:28:45.713 [2024-05-16 07:41:39.047256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:45.713 [2024-05-16 07:41:39.128066] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:45.713 [2024-05-16 07:41:39.128117] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:28:45.713 [2024-05-16 07:41:39.128126] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 
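The bdev_bounds test starting above drives the bdevio application, which brings up an RPC server and runs its CUnit suites when told to. A sketch of the two pieces the harness wires together (bdevio in the background, then the perform_tests trigger) is shown below; the sleep is a crude stand-in for the harness's waitforlisten helper and is an assumption, not what blockdev.sh actually does.

    # Start bdevio with a 2048 MB hugepage allocation against the same bdev.json,
    # then trigger its test suites over RPC (default socket /var/tmp/spdk.sock).
    /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 \
        --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    sleep 2   # stand-in for waitforlisten in autotest_common.sh
    /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests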
00:28:45.713 [2024-05-16 07:41:39.131839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.713 [2024-05-16 07:41:39.131695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.713 [2024-05-16 07:41:39.131838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.713 [2024-05-16 07:41:39.189056] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:28:46.280 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:46.280 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:28:46.280 07:41:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:46.280 I/O targets: 00:28:46.280 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:46.280 00:28:46.280 00:28:46.280 CUnit - A unit testing framework for C - Version 2.1-3 00:28:46.280 http://cunit.sourceforge.net/ 00:28:46.280 00:28:46.280 00:28:46.280 Suite: bdevio tests on: Nvme0n1 00:28:46.280 Test: blockdev write read block ...passed 00:28:46.280 Test: blockdev write zeroes read block ...passed 00:28:46.280 Test: blockdev write zeroes read no split ...passed 00:28:46.280 Test: blockdev write zeroes read split ...passed 00:28:46.280 Test: blockdev write zeroes read split partial ...passed 00:28:46.280 Test: blockdev reset ...[2024-05-16 07:41:39.750654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:28:46.280 [2024-05-16 07:41:39.751799] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:46.280 passed 00:28:46.280 Test: blockdev write read 8 blocks ...passed 00:28:46.280 Test: blockdev write read size > 128k ...passed 00:28:46.280 Test: blockdev write read invalid size ...passed 00:28:46.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:46.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:46.280 Test: blockdev write read max offset ...passed 00:28:46.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:46.280 Test: blockdev writev readv 8 blocks ...passed 00:28:46.280 Test: blockdev writev readv 30 x 1block ...passed 00:28:46.280 Test: blockdev writev readv block ...passed 00:28:46.280 Test: blockdev writev readv size > 128k ...passed 00:28:46.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:46.280 Test: blockdev comparev and writev ...[2024-05-16 07:41:39.755246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1f7947000 len:0x1000 00:28:46.280 [2024-05-16 07:41:39.755286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:46.280 passed 00:28:46.280 Test: blockdev nvme passthru rw ...passed 00:28:46.280 Test: blockdev nvme passthru vendor specific ...passed 00:28:46.280 Test: blockdev nvme admin passthru ...[2024-05-16 07:41:39.755693] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:46.280 [2024-05-16 07:41:39.755709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:46.280 passed 00:28:46.280 Test: blockdev copy ...passed 00:28:46.280 00:28:46.280 Run Summary: Type Total Ran Passed Failed 
Inactive 00:28:46.280 suites 1 1 n/a 0 0 00:28:46.280 tests 23 23 23 0 0 00:28:46.280 asserts 152 152 152 0 n/a 00:28:46.280 00:28:46.280 Elapsed time = 0.031 seconds 00:28:46.280 0 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66754 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 66754 ']' 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 66754 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps -c -o command 66754 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # tail -1 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=bdevio 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' bdevio = sudo ']' 00:28:46.281 killing process with pid 66754 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66754' 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 66754 00:28:46.281 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 66754 00:28:46.540 07:41:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:28:46.540 00:28:46.540 real 0m1.380s 00:28:46.540 user 0m2.864s 00:28:46.540 sys 0m0.557s 00:28:46.540 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:46.540 07:41:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:46.540 ************************************ 00:28:46.540 END TEST bdev_bounds 00:28:46.540 ************************************ 00:28:46.540 07:41:39 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:28:46.540 07:41:39 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:28:46.540 07:41:39 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.540 07:41:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:46.540 ************************************ 00:28:46.540 START TEST bdev_nbd 00:28:46.540 ************************************ 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:28:46.541 00:28:46.541 real 0m0.005s 00:28:46.541 user 0m0.005s 00:28:46.541 sys 0m0.001s 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:46.541 ************************************ 00:28:46.541 END TEST bdev_nbd 00:28:46.541 ************************************ 00:28:46.541 07:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:46.541 07:41:40 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:28:46.541 07:41:40 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 
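The bdev_nbd step above is effectively a no-op on this platform: nbd_function_test checks the kernel name and returns immediately when it is not Linux, which is why the test "passes" in a few microseconds with no I/O. Reconstructed from the uname/return xtrace above, the guard is essentially:

    # blockdev.sh gates the NBD test on the host OS (FreeBSD has no /dev/nbd*).
    if [[ $(uname -s) != Linux ]]; then
        return 0    # skip NBD coverage, as seen in this FreeBSD run
    fi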
00:28:46.541 skipping fio tests on NVMe due to multi-ns failures. 00:28:46.541 07:41:40 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:46.541 07:41:40 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:46.541 07:41:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:46.541 07:41:40 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:28:46.541 07:41:40 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.541 07:41:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:46.541 ************************************ 00:28:46.541 START TEST bdev_verify 00:28:46.541 ************************************ 00:28:46.541 07:41:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:46.541 [2024-05-16 07:41:40.061156] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:46.541 [2024-05-16 07:41:40.061407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:47.107 EAL: TSC is not safe to use in SMP mode 00:28:47.107 EAL: TSC is not invariant 00:28:47.107 [2024-05-16 07:41:40.550229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:47.107 [2024-05-16 07:41:40.647095] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:47.107 [2024-05-16 07:41:40.647154] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:28:47.107 [2024-05-16 07:41:40.649951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.107 [2024-05-16 07:41:40.649947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.366 [2024-05-16 07:41:40.707117] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:28:47.366 Running I/O for 5 seconds... 
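The bdev_verify step above is a plain bdevperf run in verify mode against the same bdev.json: queue depth 128, 4096-byte I/O, 5 seconds, core mask 0x3. Reflowed onto multiple lines, the command the harness executes is:

    # bdevperf verify pass, exactly as invoked by blockdev.sh in this run.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3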
00:28:52.629 00:28:52.629 Latency(us) 00:28:52.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.629 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:52.629 Verification LBA range: start 0x0 length 0xa0000 00:28:52.629 Nvme0n1 : 5.01 20850.81 81.45 0.00 0.00 6129.59 667.06 20721.80 00:28:52.629 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:52.629 Verification LBA range: start 0xa0000 length 0xa0000 00:28:52.629 Nvme0n1 : 5.00 20153.72 78.73 0.00 0.00 6341.74 667.06 22094.93 00:28:52.629 =================================================================================================================== 00:28:52.629 Total : 41004.53 160.17 0.00 0.00 6233.85 667.06 22094.93 00:28:53.199 00:28:53.199 real 0m6.477s 00:28:53.199 user 0m11.643s 00:28:53.199 sys 0m0.562s 00:28:53.199 ************************************ 00:28:53.199 END TEST bdev_verify 00:28:53.199 ************************************ 00:28:53.199 07:41:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:53.199 07:41:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:53.199 07:41:46 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:53.199 07:41:46 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:28:53.199 07:41:46 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:53.199 07:41:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:53.199 ************************************ 00:28:53.199 START TEST bdev_verify_big_io 00:28:53.199 ************************************ 00:28:53.199 07:41:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:53.199 [2024-05-16 07:41:46.579316] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:53.199 [2024-05-16 07:41:46.579502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:28:53.764 EAL: TSC is not safe to use in SMP mode 00:28:53.764 EAL: TSC is not invariant 00:28:53.764 [2024-05-16 07:41:47.055616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:53.764 [2024-05-16 07:41:47.142656] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:28:53.764 [2024-05-16 07:41:47.142711] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:28:53.764 [2024-05-16 07:41:47.146193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.764 [2024-05-16 07:41:47.146179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.764 [2024-05-16 07:41:47.204303] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:28:53.764 Running I/O for 5 seconds... 
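The bdev_verify_big_io pass started above uses the same bdevperf verify workload and differs only in I/O size (64 KiB instead of 4 KiB); queue depth, runtime, and core mask are unchanged:

    # Same verify workload with 65536-byte I/Os, as invoked above.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3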
00:28:59.028 00:28:59.028 Latency(us) 00:28:59.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.028 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:59.028 Verification LBA range: start 0x0 length 0xa000 00:28:59.028 Nvme0n1 : 5.01 6910.88 431.93 0.00 0.00 18427.34 88.75 42192.58 00:28:59.028 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:59.028 Verification LBA range: start 0xa000 length 0xa000 00:28:59.028 Nvme0n1 : 5.01 6774.10 423.38 0.00 0.00 18792.06 651.46 54675.59 00:28:59.028 =================================================================================================================== 00:28:59.028 Total : 13684.98 855.31 0.00 0.00 18607.86 88.75 54675.59 00:29:02.377 00:29:02.377 real 0m9.143s 00:29:02.377 user 0m17.037s 00:29:02.377 sys 0m0.530s 00:29:02.377 07:41:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:02.377 ************************************ 00:29:02.377 END TEST bdev_verify_big_io 00:29:02.377 ************************************ 00:29:02.377 07:41:55 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.377 07:41:55 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:02.377 07:41:55 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:02.377 07:41:55 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:02.377 07:41:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:02.377 ************************************ 00:29:02.377 START TEST bdev_write_zeroes 00:29:02.377 ************************************ 00:29:02.377 07:41:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:02.377 [2024-05-16 07:41:55.762614] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:02.377 [2024-05-16 07:41:55.762786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:02.943 EAL: TSC is not safe to use in SMP mode 00:29:02.943 EAL: TSC is not invariant 00:29:02.943 [2024-05-16 07:41:56.291220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.943 [2024-05-16 07:41:56.408323] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:02.943 [2024-05-16 07:41:56.411021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.943 [2024-05-16 07:41:56.470685] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:03.201 Running I/O for 1 seconds... 
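The bdev_write_zeroes pass started above swaps the workload to write_zeroes for one second on a single core (the EAL line shows -c 0x1, the default when no mask is passed). Reflowed from the run_test line, the invocation is:

    # bdevperf write_zeroes pass (1 second, default single-core mask in this run).
    /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1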
00:29:04.194 00:29:04.194 Latency(us) 00:29:04.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.194 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:04.194 Nvme0n1 : 1.00 57816.19 225.84 0.00 0.00 2211.98 706.07 14605.12 00:29:04.194 =================================================================================================================== 00:29:04.194 Total : 57816.19 225.84 0.00 0.00 2211.98 706.07 14605.12 00:29:04.453 00:29:04.453 real 0m1.990s 00:29:04.453 user 0m1.420s 00:29:04.453 sys 0m0.568s 00:29:04.453 ************************************ 00:29:04.453 END TEST bdev_write_zeroes 00:29:04.453 ************************************ 00:29:04.453 07:41:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:04.453 07:41:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:04.453 07:41:57 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:04.453 07:41:57 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:04.453 07:41:57 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:04.453 07:41:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:04.453 ************************************ 00:29:04.453 START TEST bdev_json_nonenclosed 00:29:04.453 ************************************ 00:29:04.453 07:41:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:04.453 [2024-05-16 07:41:57.795160] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:04.453 [2024-05-16 07:41:57.795348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:04.711 EAL: TSC is not safe to use in SMP mode 00:29:04.711 EAL: TSC is not invariant 00:29:04.711 [2024-05-16 07:41:58.259921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.969 [2024-05-16 07:41:58.367284] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:04.969 [2024-05-16 07:41:58.370508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.969 [2024-05-16 07:41:58.370575] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:04.969 [2024-05-16 07:41:58.370594] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:04.969 [2024-05-16 07:41:58.370610] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:05.227 00:29:05.227 real 0m0.735s 00:29:05.227 user 0m0.228s 00:29:05.227 sys 0m0.505s 00:29:05.227 ************************************ 00:29:05.227 END TEST bdev_json_nonenclosed 00:29:05.227 ************************************ 00:29:05.227 07:41:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:05.227 07:41:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:05.227 07:41:58 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:05.227 07:41:58 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:29:05.227 07:41:58 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:05.227 07:41:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:05.227 ************************************ 00:29:05.227 START TEST bdev_json_nonarray 00:29:05.227 ************************************ 00:29:05.227 07:41:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:05.227 [2024-05-16 07:41:58.573092] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:05.227 [2024-05-16 07:41:58.573410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:05.792 EAL: TSC is not safe to use in SMP mode 00:29:05.792 EAL: TSC is not invariant 00:29:05.792 [2024-05-16 07:41:59.087634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.792 [2024-05-16 07:41:59.192066] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:05.792 [2024-05-16 07:41:59.194326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.792 [2024-05-16 07:41:59.194369] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
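The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed configs and expect spdk_app_stop to exit non-zero. The real fixture files live under test/bdev/; the snippets below are hypothetical stand-ins, sketched only to illustrate inputs that would trigger the two errors seen above ("not enclosed in {}" and "'subsystems' should be an array").

    # nonenclosed.json-style input (illustrative): top level is not a JSON object.
    cat <<'EOF'
    "subsystems": []
    EOF

    # nonarray.json-style input (illustrative): 'subsystems' is present but not an array.
    cat <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF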
00:29:05.792 [2024-05-16 07:41:59.194379] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:05.792 [2024-05-16 07:41:59.194390] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:05.792 00:29:05.792 real 0m0.757s 00:29:05.792 user 0m0.204s 00:29:05.792 sys 0m0.551s 00:29:05.792 07:41:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:05.792 ************************************ 00:29:05.792 END TEST bdev_json_nonarray 00:29:05.792 ************************************ 00:29:05.792 07:41:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:29:06.056 07:41:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:29:06.056 00:29:06.056 real 0m23.380s 00:29:06.056 user 0m35.468s 00:29:06.056 sys 0m4.742s 00:29:06.056 07:41:59 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:06.056 07:41:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.056 ************************************ 00:29:06.056 END TEST blockdev_nvme 00:29:06.056 ************************************ 00:29:06.056 07:41:59 -- spdk/autotest.sh@209 -- # uname -s 00:29:06.056 07:41:59 -- spdk/autotest.sh@209 -- # [[ FreeBSD == Linux ]] 00:29:06.056 07:41:59 -- spdk/autotest.sh@212 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:06.057 07:41:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:06.057 07:41:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:06.057 07:41:59 -- common/autotest_common.sh@10 -- # set +x 00:29:06.057 ************************************ 00:29:06.057 START TEST nvme 00:29:06.057 ************************************ 00:29:06.057 07:41:59 nvme -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:29:06.057 * Looking for test storage... 
00:29:06.057 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:29:06.057 07:41:59 nvme -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:06.324 hw.nic_uio.bdfs="0:16:0" 00:29:06.324 07:41:59 nvme -- nvme/nvme.sh@79 -- # uname 00:29:06.324 07:41:59 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:29:06.324 07:41:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:06.324 07:41:59 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:29:06.324 07:41:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:06.324 07:41:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.324 ************************************ 00:29:06.324 START TEST nvme_reset 00:29:06.324 ************************************ 00:29:06.324 07:41:59 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:06.891 EAL: TSC is not safe to use in SMP mode 00:29:06.891 EAL: TSC is not invariant 00:29:06.891 [2024-05-16 07:42:00.250244] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:06.891 Initializing NVMe Controllers 00:29:06.891 Skipping QEMU NVMe SSD at 0000:00:10.0 00:29:06.891 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:29:06.891 00:29:06.891 real 0m0.529s 00:29:06.891 user 0m0.011s 00:29:06.891 sys 0m0.518s 00:29:06.891 07:42:00 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:06.891 07:42:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:29:06.891 ************************************ 00:29:06.891 END TEST nvme_reset 00:29:06.891 ************************************ 00:29:06.891 07:42:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:29:06.891 07:42:00 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:06.891 07:42:00 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:06.891 07:42:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:06.891 ************************************ 00:29:06.891 START TEST nvme_identify 00:29:06.891 ************************************ 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:29:06.891 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:29:06.891 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:29:06.891 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:29:06.891 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:06.891 07:42:00 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:29:06.891 07:42:00 nvme.nvme_identify -- 
nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:29:07.457 EAL: TSC is not safe to use in SMP mode 00:29:07.457 EAL: TSC is not invariant 00:29:07.457 [2024-05-16 07:42:00.867797] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:07.457 ===================================================== 00:29:07.457 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:07.457 ===================================================== 00:29:07.457 Controller Capabilities/Features 00:29:07.457 ================================ 00:29:07.457 Vendor ID: 1b36 00:29:07.457 Subsystem Vendor ID: 1af4 00:29:07.457 Serial Number: 12340 00:29:07.457 Model Number: QEMU NVMe Ctrl 00:29:07.457 Firmware Version: 8.0.0 00:29:07.457 Recommended Arb Burst: 6 00:29:07.457 IEEE OUI Identifier: 00 54 52 00:29:07.457 Multi-path I/O 00:29:07.457 May have multiple subsystem ports: No 00:29:07.457 May have multiple controllers: No 00:29:07.457 Associated with SR-IOV VF: No 00:29:07.457 Max Data Transfer Size: 524288 00:29:07.457 Max Number of Namespaces: 256 00:29:07.457 Max Number of I/O Queues: 64 00:29:07.457 NVMe Specification Version (VS): 1.4 00:29:07.457 NVMe Specification Version (Identify): 1.4 00:29:07.457 Maximum Queue Entries: 2048 00:29:07.457 Contiguous Queues Required: Yes 00:29:07.457 Arbitration Mechanisms Supported 00:29:07.457 Weighted Round Robin: Not Supported 00:29:07.457 Vendor Specific: Not Supported 00:29:07.457 Reset Timeout: 7500 ms 00:29:07.457 Doorbell Stride: 4 bytes 00:29:07.457 NVM Subsystem Reset: Not Supported 00:29:07.457 Command Sets Supported 00:29:07.457 NVM Command Set: Supported 00:29:07.457 Boot Partition: Not Supported 00:29:07.457 Memory Page Size Minimum: 4096 bytes 00:29:07.457 Memory Page Size Maximum: 65536 bytes 00:29:07.457 Persistent Memory Region: Not Supported 00:29:07.457 Optional Asynchronous Events Supported 00:29:07.457 Namespace Attribute Notices: Supported 00:29:07.457 Firmware Activation Notices: Not Supported 00:29:07.457 ANA Change Notices: Not Supported 00:29:07.457 PLE Aggregate Log Change Notices: Not Supported 00:29:07.457 LBA Status Info Alert Notices: Not Supported 00:29:07.457 EGE Aggregate Log Change Notices: Not Supported 00:29:07.457 Normal NVM Subsystem Shutdown event: Not Supported 00:29:07.457 Zone Descriptor Change Notices: Not Supported 00:29:07.457 Discovery Log Change Notices: Not Supported 00:29:07.457 Controller Attributes 00:29:07.457 128-bit Host Identifier: Not Supported 00:29:07.457 Non-Operational Permissive Mode: Not Supported 00:29:07.457 NVM Sets: Not Supported 00:29:07.457 Read Recovery Levels: Not Supported 00:29:07.457 Endurance Groups: Not Supported 00:29:07.457 Predictable Latency Mode: Not Supported 00:29:07.457 Traffic Based Keep ALive: Not Supported 00:29:07.457 Namespace Granularity: Not Supported 00:29:07.457 SQ Associations: Not Supported 00:29:07.457 UUID List: Not Supported 00:29:07.457 Multi-Domain Subsystem: Not Supported 00:29:07.457 Fixed Capacity Management: Not Supported 00:29:07.457 Variable Capacity Management: Not Supported 00:29:07.457 Delete Endurance Group: Not Supported 00:29:07.457 Delete NVM Set: Not Supported 00:29:07.457 Extended LBA Formats Supported: Supported 00:29:07.457 Flexible Data Placement Supported: Not Supported 00:29:07.457 00:29:07.457 Controller Memory Buffer Support 00:29:07.457 ================================ 00:29:07.457 Supported: No 00:29:07.457 00:29:07.457 Persistent Memory Region Support 00:29:07.457 
================================ 00:29:07.457 Supported: No 00:29:07.457 00:29:07.457 Admin Command Set Attributes 00:29:07.457 ============================ 00:29:07.457 Security Send/Receive: Not Supported 00:29:07.457 Format NVM: Supported 00:29:07.457 Firmware Activate/Download: Not Supported 00:29:07.457 Namespace Management: Supported 00:29:07.457 Device Self-Test: Not Supported 00:29:07.457 Directives: Supported 00:29:07.457 NVMe-MI: Not Supported 00:29:07.457 Virtualization Management: Not Supported 00:29:07.457 Doorbell Buffer Config: Supported 00:29:07.457 Get LBA Status Capability: Not Supported 00:29:07.457 Command & Feature Lockdown Capability: Not Supported 00:29:07.457 Abort Command Limit: 4 00:29:07.457 Async Event Request Limit: 4 00:29:07.457 Number of Firmware Slots: N/A 00:29:07.457 Firmware Slot 1 Read-Only: N/A 00:29:07.457 Firmware Activation Without Reset: N/A 00:29:07.457 Multiple Update Detection Support: N/A 00:29:07.457 Firmware Update Granularity: No Information Provided 00:29:07.457 Per-Namespace SMART Log: Yes 00:29:07.457 Asymmetric Namespace Access Log Page: Not Supported 00:29:07.457 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:07.457 Command Effects Log Page: Supported 00:29:07.457 Get Log Page Extended Data: Supported 00:29:07.457 Telemetry Log Pages: Not Supported 00:29:07.457 Persistent Event Log Pages: Not Supported 00:29:07.457 Supported Log Pages Log Page: May Support 00:29:07.457 Commands Supported & Effects Log Page: Not Supported 00:29:07.458 Feature Identifiers & Effects Log Page:May Support 00:29:07.458 NVMe-MI Commands & Effects Log Page: May Support 00:29:07.458 Data Area 4 for Telemetry Log: Not Supported 00:29:07.458 Error Log Page Entries Supported: 1 00:29:07.458 Keep Alive: Not Supported 00:29:07.458 00:29:07.458 NVM Command Set Attributes 00:29:07.458 ========================== 00:29:07.458 Submission Queue Entry Size 00:29:07.458 Max: 64 00:29:07.458 Min: 64 00:29:07.458 Completion Queue Entry Size 00:29:07.458 Max: 16 00:29:07.458 Min: 16 00:29:07.458 Number of Namespaces: 256 00:29:07.458 Compare Command: Supported 00:29:07.458 Write Uncorrectable Command: Not Supported 00:29:07.458 Dataset Management Command: Supported 00:29:07.458 Write Zeroes Command: Supported 00:29:07.458 Set Features Save Field: Supported 00:29:07.458 Reservations: Not Supported 00:29:07.458 Timestamp: Supported 00:29:07.458 Copy: Supported 00:29:07.458 Volatile Write Cache: Present 00:29:07.458 Atomic Write Unit (Normal): 1 00:29:07.458 Atomic Write Unit (PFail): 1 00:29:07.458 Atomic Compare & Write Unit: 1 00:29:07.458 Fused Compare & Write: Not Supported 00:29:07.458 Scatter-Gather List 00:29:07.458 SGL Command Set: Supported 00:29:07.458 SGL Keyed: Not Supported 00:29:07.458 SGL Bit Bucket Descriptor: Not Supported 00:29:07.458 SGL Metadata Pointer: Not Supported 00:29:07.458 Oversized SGL: Not Supported 00:29:07.458 SGL Metadata Address: Not Supported 00:29:07.458 SGL Offset: Not Supported 00:29:07.458 Transport SGL Data Block: Not Supported 00:29:07.458 Replay Protected Memory Block: Not Supported 00:29:07.458 00:29:07.458 Firmware Slot Information 00:29:07.458 ========================= 00:29:07.458 Active slot: 1 00:29:07.458 Slot 1 Firmware Revision: 1.0 00:29:07.458 00:29:07.458 00:29:07.458 Commands Supported and Effects 00:29:07.458 ============================== 00:29:07.458 Admin Commands 00:29:07.458 -------------- 00:29:07.458 Delete I/O Submission Queue (00h): Supported 00:29:07.458 Create I/O Submission Queue (01h): Supported 00:29:07.458 
Get Log Page (02h): Supported 00:29:07.458 Delete I/O Completion Queue (04h): Supported 00:29:07.458 Create I/O Completion Queue (05h): Supported 00:29:07.458 Identify (06h): Supported 00:29:07.458 Abort (08h): Supported 00:29:07.458 Set Features (09h): Supported 00:29:07.458 Get Features (0Ah): Supported 00:29:07.458 Asynchronous Event Request (0Ch): Supported 00:29:07.458 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:07.458 Directive Send (19h): Supported 00:29:07.458 Directive Receive (1Ah): Supported 00:29:07.458 Virtualization Management (1Ch): Supported 00:29:07.458 Doorbell Buffer Config (7Ch): Supported 00:29:07.458 Format NVM (80h): Supported LBA-Change 00:29:07.458 I/O Commands 00:29:07.458 ------------ 00:29:07.458 Flush (00h): Supported LBA-Change 00:29:07.458 Write (01h): Supported LBA-Change 00:29:07.458 Read (02h): Supported 00:29:07.458 Compare (05h): Supported 00:29:07.458 Write Zeroes (08h): Supported LBA-Change 00:29:07.458 Dataset Management (09h): Supported LBA-Change 00:29:07.458 Unknown (0Ch): Supported 00:29:07.458 Unknown (12h): Supported 00:29:07.458 Copy (19h): Supported LBA-Change 00:29:07.458 Unknown (1Dh): Supported LBA-Change 00:29:07.458 00:29:07.458 Error Log 00:29:07.458 ========= 00:29:07.458 00:29:07.458 Arbitration 00:29:07.458 =========== 00:29:07.458 Arbitration Burst: no limit 00:29:07.458 00:29:07.458 Power Management 00:29:07.458 ================ 00:29:07.458 Number of Power States: 1 00:29:07.458 Current Power State: Power State #0 00:29:07.458 Power State #0: 00:29:07.458 Max Power: 25.00 W 00:29:07.458 Non-Operational State: Operational 00:29:07.458 Entry Latency: 16 microseconds 00:29:07.458 Exit Latency: 4 microseconds 00:29:07.458 Relative Read Throughput: 0 00:29:07.458 Relative Read Latency: 0 00:29:07.458 Relative Write Throughput: 0 00:29:07.458 Relative Write Latency: 0 00:29:07.458 Idle Power: Not Reported 00:29:07.458 Active Power: Not Reported 00:29:07.458 Non-Operational Permissive Mode: Not Supported 00:29:07.458 00:29:07.458 Health Information 00:29:07.458 ================== 00:29:07.458 Critical Warnings: 00:29:07.458 Available Spare Space: OK 00:29:07.458 Temperature: OK 00:29:07.458 Device Reliability: OK 00:29:07.458 Read Only: No 00:29:07.458 Volatile Memory Backup: OK 00:29:07.458 Current Temperature: 323 Kelvin (50 Celsius) 00:29:07.458 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:07.458 Available Spare: 0% 00:29:07.458 Available Spare Threshold: 0% 00:29:07.458 Life Percentage Used: 0% 00:29:07.458 Data Units Read: 10437 00:29:07.458 Data Units Written: 10422 00:29:07.458 Host Read Commands: 274003 00:29:07.458 Host Write Commands: 273852 00:29:07.458 Controller Busy Time: 0 minutes 00:29:07.458 Power Cycles: 0 00:29:07.458 Power On Hours: 0 hours 00:29:07.458 Unsafe Shutdowns: 0 00:29:07.458 Unrecoverable Media Errors: 0 00:29:07.458 Lifetime Error Log Entries: 0 00:29:07.458 Warning Temperature Time: 0 minutes 00:29:07.458 Critical Temperature Time: 0 minutes 00:29:07.458 00:29:07.458 Number of Queues 00:29:07.458 ================ 00:29:07.458 Number of I/O Submission Queues: 64 00:29:07.458 Number of I/O Completion Queues: 64 00:29:07.458 00:29:07.458 ZNS Specific Controller Data 00:29:07.458 ============================ 00:29:07.458 Zone Append Size Limit: 0 00:29:07.458 00:29:07.458 00:29:07.458 Active Namespaces 00:29:07.458 ================= 00:29:07.458 Namespace ID:1 00:29:07.458 Error Recovery Timeout: Unlimited 00:29:07.458 Command Set Identifier: NVM (00h) 00:29:07.458 Deallocate: 
Supported 00:29:07.458 Deallocated/Unwritten Error: Supported 00:29:07.458 Deallocated Read Value: All 0x00 00:29:07.458 Deallocate in Write Zeroes: Not Supported 00:29:07.458 Deallocated Guard Field: 0xFFFF 00:29:07.458 Flush: Supported 00:29:07.458 Reservation: Not Supported 00:29:07.458 Namespace Sharing Capabilities: Private 00:29:07.458 Size (in LBAs): 1310720 (5GiB) 00:29:07.458 Capacity (in LBAs): 1310720 (5GiB) 00:29:07.458 Utilization (in LBAs): 1310720 (5GiB) 00:29:07.458 Thin Provisioning: Not Supported 00:29:07.458 Per-NS Atomic Units: No 00:29:07.458 Maximum Single Source Range Length: 128 00:29:07.458 Maximum Copy Length: 128 00:29:07.458 Maximum Source Range Count: 128 00:29:07.458 NGUID/EUI64 Never Reused: No 00:29:07.458 Namespace Write Protected: No 00:29:07.458 Number of LBA Formats: 8 00:29:07.458 Current LBA Format: LBA Format #04 00:29:07.458 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:07.458 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:07.458 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:07.458 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:07.458 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:07.458 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:07.458 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:07.458 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:07.458 00:29:07.458 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:29:07.458 07:42:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:08.024 EAL: TSC is not safe to use in SMP mode 00:29:08.024 EAL: TSC is not invariant 00:29:08.024 [2024-05-16 07:42:01.391389] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:08.024 ===================================================== 00:29:08.024 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:08.024 ===================================================== 00:29:08.024 Controller Capabilities/Features 00:29:08.024 ================================ 00:29:08.024 Vendor ID: 1b36 00:29:08.024 Subsystem Vendor ID: 1af4 00:29:08.024 Serial Number: 12340 00:29:08.024 Model Number: QEMU NVMe Ctrl 00:29:08.024 Firmware Version: 8.0.0 00:29:08.024 Recommended Arb Burst: 6 00:29:08.024 IEEE OUI Identifier: 00 54 52 00:29:08.024 Multi-path I/O 00:29:08.024 May have multiple subsystem ports: No 00:29:08.024 May have multiple controllers: No 00:29:08.024 Associated with SR-IOV VF: No 00:29:08.024 Max Data Transfer Size: 524288 00:29:08.024 Max Number of Namespaces: 256 00:29:08.024 Max Number of I/O Queues: 64 00:29:08.024 NVMe Specification Version (VS): 1.4 00:29:08.024 NVMe Specification Version (Identify): 1.4 00:29:08.024 Maximum Queue Entries: 2048 00:29:08.024 Contiguous Queues Required: Yes 00:29:08.024 Arbitration Mechanisms Supported 00:29:08.024 Weighted Round Robin: Not Supported 00:29:08.024 Vendor Specific: Not Supported 00:29:08.024 Reset Timeout: 7500 ms 00:29:08.024 Doorbell Stride: 4 bytes 00:29:08.024 NVM Subsystem Reset: Not Supported 00:29:08.024 Command Sets Supported 00:29:08.024 NVM Command Set: Supported 00:29:08.024 Boot Partition: Not Supported 00:29:08.024 Memory Page Size Minimum: 4096 bytes 00:29:08.024 Memory Page Size Maximum: 65536 bytes 00:29:08.024 Persistent Memory Region: Not Supported 00:29:08.024 Optional Asynchronous Events Supported 00:29:08.024 Namespace Attribute Notices: Supported 00:29:08.024 Firmware 
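The nvme_identify test runs spdk_nvme_identify twice: once with -i 0 against every BDF returned by gen_nvme.sh (the dump above), and once more, per the command at the end of the block above, pinned to the controller's PCIe transport ID. Reflowed, that second invocation is:

    # Identify a single controller by transport ID (as invoked by nvme.sh in this run).
    /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:10.0' -i 0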
Activation Notices: Not Supported 00:29:08.024 ANA Change Notices: Not Supported 00:29:08.024 PLE Aggregate Log Change Notices: Not Supported 00:29:08.024 LBA Status Info Alert Notices: Not Supported 00:29:08.024 EGE Aggregate Log Change Notices: Not Supported 00:29:08.024 Normal NVM Subsystem Shutdown event: Not Supported 00:29:08.024 Zone Descriptor Change Notices: Not Supported 00:29:08.024 Discovery Log Change Notices: Not Supported 00:29:08.024 Controller Attributes 00:29:08.024 128-bit Host Identifier: Not Supported 00:29:08.024 Non-Operational Permissive Mode: Not Supported 00:29:08.024 NVM Sets: Not Supported 00:29:08.024 Read Recovery Levels: Not Supported 00:29:08.024 Endurance Groups: Not Supported 00:29:08.024 Predictable Latency Mode: Not Supported 00:29:08.024 Traffic Based Keep ALive: Not Supported 00:29:08.024 Namespace Granularity: Not Supported 00:29:08.024 SQ Associations: Not Supported 00:29:08.024 UUID List: Not Supported 00:29:08.024 Multi-Domain Subsystem: Not Supported 00:29:08.024 Fixed Capacity Management: Not Supported 00:29:08.024 Variable Capacity Management: Not Supported 00:29:08.024 Delete Endurance Group: Not Supported 00:29:08.024 Delete NVM Set: Not Supported 00:29:08.024 Extended LBA Formats Supported: Supported 00:29:08.024 Flexible Data Placement Supported: Not Supported 00:29:08.024 00:29:08.024 Controller Memory Buffer Support 00:29:08.024 ================================ 00:29:08.024 Supported: No 00:29:08.024 00:29:08.024 Persistent Memory Region Support 00:29:08.024 ================================ 00:29:08.024 Supported: No 00:29:08.024 00:29:08.024 Admin Command Set Attributes 00:29:08.024 ============================ 00:29:08.024 Security Send/Receive: Not Supported 00:29:08.024 Format NVM: Supported 00:29:08.024 Firmware Activate/Download: Not Supported 00:29:08.024 Namespace Management: Supported 00:29:08.024 Device Self-Test: Not Supported 00:29:08.024 Directives: Supported 00:29:08.024 NVMe-MI: Not Supported 00:29:08.024 Virtualization Management: Not Supported 00:29:08.024 Doorbell Buffer Config: Supported 00:29:08.024 Get LBA Status Capability: Not Supported 00:29:08.025 Command & Feature Lockdown Capability: Not Supported 00:29:08.025 Abort Command Limit: 4 00:29:08.025 Async Event Request Limit: 4 00:29:08.025 Number of Firmware Slots: N/A 00:29:08.025 Firmware Slot 1 Read-Only: N/A 00:29:08.025 Firmware Activation Without Reset: N/A 00:29:08.025 Multiple Update Detection Support: N/A 00:29:08.025 Firmware Update Granularity: No Information Provided 00:29:08.025 Per-Namespace SMART Log: Yes 00:29:08.025 Asymmetric Namespace Access Log Page: Not Supported 00:29:08.025 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:08.025 Command Effects Log Page: Supported 00:29:08.025 Get Log Page Extended Data: Supported 00:29:08.025 Telemetry Log Pages: Not Supported 00:29:08.025 Persistent Event Log Pages: Not Supported 00:29:08.025 Supported Log Pages Log Page: May Support 00:29:08.025 Commands Supported & Effects Log Page: Not Supported 00:29:08.025 Feature Identifiers & Effects Log Page:May Support 00:29:08.025 NVMe-MI Commands & Effects Log Page: May Support 00:29:08.025 Data Area 4 for Telemetry Log: Not Supported 00:29:08.025 Error Log Page Entries Supported: 1 00:29:08.025 Keep Alive: Not Supported 00:29:08.025 00:29:08.025 NVM Command Set Attributes 00:29:08.025 ========================== 00:29:08.025 Submission Queue Entry Size 00:29:08.025 Max: 64 00:29:08.025 Min: 64 00:29:08.025 Completion Queue Entry Size 00:29:08.025 Max: 16 
00:29:08.025 Min: 16 00:29:08.025 Number of Namespaces: 256 00:29:08.025 Compare Command: Supported 00:29:08.025 Write Uncorrectable Command: Not Supported 00:29:08.025 Dataset Management Command: Supported 00:29:08.025 Write Zeroes Command: Supported 00:29:08.025 Set Features Save Field: Supported 00:29:08.025 Reservations: Not Supported 00:29:08.025 Timestamp: Supported 00:29:08.025 Copy: Supported 00:29:08.025 Volatile Write Cache: Present 00:29:08.025 Atomic Write Unit (Normal): 1 00:29:08.025 Atomic Write Unit (PFail): 1 00:29:08.025 Atomic Compare & Write Unit: 1 00:29:08.025 Fused Compare & Write: Not Supported 00:29:08.025 Scatter-Gather List 00:29:08.025 SGL Command Set: Supported 00:29:08.025 SGL Keyed: Not Supported 00:29:08.025 SGL Bit Bucket Descriptor: Not Supported 00:29:08.025 SGL Metadata Pointer: Not Supported 00:29:08.025 Oversized SGL: Not Supported 00:29:08.025 SGL Metadata Address: Not Supported 00:29:08.025 SGL Offset: Not Supported 00:29:08.025 Transport SGL Data Block: Not Supported 00:29:08.025 Replay Protected Memory Block: Not Supported 00:29:08.025 00:29:08.025 Firmware Slot Information 00:29:08.025 ========================= 00:29:08.025 Active slot: 1 00:29:08.025 Slot 1 Firmware Revision: 1.0 00:29:08.025 00:29:08.025 00:29:08.025 Commands Supported and Effects 00:29:08.025 ============================== 00:29:08.025 Admin Commands 00:29:08.025 -------------- 00:29:08.025 Delete I/O Submission Queue (00h): Supported 00:29:08.025 Create I/O Submission Queue (01h): Supported 00:29:08.025 Get Log Page (02h): Supported 00:29:08.025 Delete I/O Completion Queue (04h): Supported 00:29:08.025 Create I/O Completion Queue (05h): Supported 00:29:08.025 Identify (06h): Supported 00:29:08.025 Abort (08h): Supported 00:29:08.025 Set Features (09h): Supported 00:29:08.025 Get Features (0Ah): Supported 00:29:08.025 Asynchronous Event Request (0Ch): Supported 00:29:08.025 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:08.025 Directive Send (19h): Supported 00:29:08.025 Directive Receive (1Ah): Supported 00:29:08.025 Virtualization Management (1Ch): Supported 00:29:08.025 Doorbell Buffer Config (7Ch): Supported 00:29:08.025 Format NVM (80h): Supported LBA-Change 00:29:08.025 I/O Commands 00:29:08.025 ------------ 00:29:08.025 Flush (00h): Supported LBA-Change 00:29:08.025 Write (01h): Supported LBA-Change 00:29:08.025 Read (02h): Supported 00:29:08.025 Compare (05h): Supported 00:29:08.025 Write Zeroes (08h): Supported LBA-Change 00:29:08.025 Dataset Management (09h): Supported LBA-Change 00:29:08.025 Unknown (0Ch): Supported 00:29:08.025 Unknown (12h): Supported 00:29:08.025 Copy (19h): Supported LBA-Change 00:29:08.025 Unknown (1Dh): Supported LBA-Change 00:29:08.025 00:29:08.025 Error Log 00:29:08.025 ========= 00:29:08.025 00:29:08.025 Arbitration 00:29:08.025 =========== 00:29:08.025 Arbitration Burst: no limit 00:29:08.025 00:29:08.025 Power Management 00:29:08.025 ================ 00:29:08.025 Number of Power States: 1 00:29:08.025 Current Power State: Power State #0 00:29:08.025 Power State #0: 00:29:08.025 Max Power: 25.00 W 00:29:08.025 Non-Operational State: Operational 00:29:08.025 Entry Latency: 16 microseconds 00:29:08.025 Exit Latency: 4 microseconds 00:29:08.025 Relative Read Throughput: 0 00:29:08.025 Relative Read Latency: 0 00:29:08.025 Relative Write Throughput: 0 00:29:08.025 Relative Write Latency: 0 00:29:08.025 Idle Power: Not Reported 00:29:08.025 Active Power: Not Reported 00:29:08.025 Non-Operational Permissive Mode: Not Supported 
00:29:08.025 00:29:08.025 Health Information 00:29:08.025 ================== 00:29:08.025 Critical Warnings: 00:29:08.025 Available Spare Space: OK 00:29:08.025 Temperature: OK 00:29:08.025 Device Reliability: OK 00:29:08.025 Read Only: No 00:29:08.025 Volatile Memory Backup: OK 00:29:08.025 Current Temperature: 323 Kelvin (50 Celsius) 00:29:08.025 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:08.025 Available Spare: 0% 00:29:08.025 Available Spare Threshold: 0% 00:29:08.025 Life Percentage Used: 0% 00:29:08.025 Data Units Read: 10437 00:29:08.025 Data Units Written: 10422 00:29:08.025 Host Read Commands: 274003 00:29:08.025 Host Write Commands: 273852 00:29:08.025 Controller Busy Time: 0 minutes 00:29:08.025 Power Cycles: 0 00:29:08.025 Power On Hours: 0 hours 00:29:08.025 Unsafe Shutdowns: 0 00:29:08.025 Unrecoverable Media Errors: 0 00:29:08.025 Lifetime Error Log Entries: 0 00:29:08.025 Warning Temperature Time: 0 minutes 00:29:08.025 Critical Temperature Time: 0 minutes 00:29:08.025 00:29:08.025 Number of Queues 00:29:08.025 ================ 00:29:08.025 Number of I/O Submission Queues: 64 00:29:08.025 Number of I/O Completion Queues: 64 00:29:08.025 00:29:08.025 ZNS Specific Controller Data 00:29:08.025 ============================ 00:29:08.025 Zone Append Size Limit: 0 00:29:08.025 00:29:08.025 00:29:08.025 Active Namespaces 00:29:08.025 ================= 00:29:08.025 Namespace ID:1 00:29:08.025 Error Recovery Timeout: Unlimited 00:29:08.025 Command Set Identifier: NVM (00h) 00:29:08.025 Deallocate: Supported 00:29:08.025 Deallocated/Unwritten Error: Supported 00:29:08.025 Deallocated Read Value: All 0x00 00:29:08.025 Deallocate in Write Zeroes: Not Supported 00:29:08.025 Deallocated Guard Field: 0xFFFF 00:29:08.025 Flush: Supported 00:29:08.025 Reservation: Not Supported 00:29:08.025 Namespace Sharing Capabilities: Private 00:29:08.025 Size (in LBAs): 1310720 (5GiB) 00:29:08.025 Capacity (in LBAs): 1310720 (5GiB) 00:29:08.025 Utilization (in LBAs): 1310720 (5GiB) 00:29:08.025 Thin Provisioning: Not Supported 00:29:08.025 Per-NS Atomic Units: No 00:29:08.025 Maximum Single Source Range Length: 128 00:29:08.025 Maximum Copy Length: 128 00:29:08.025 Maximum Source Range Count: 128 00:29:08.025 NGUID/EUI64 Never Reused: No 00:29:08.025 Namespace Write Protected: No 00:29:08.025 Number of LBA Formats: 8 00:29:08.025 Current LBA Format: LBA Format #04 00:29:08.025 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:08.025 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:08.025 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:08.025 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:08.025 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:08.025 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:08.025 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:08.025 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:08.025 00:29:08.025 00:29:08.025 real 0m1.093s 00:29:08.025 user 0m0.052s 00:29:08.025 sys 0m1.058s 00:29:08.025 07:42:01 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:08.025 07:42:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:29:08.025 ************************************ 00:29:08.025 END TEST nvme_identify 00:29:08.025 ************************************ 00:29:08.025 07:42:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:29:08.025 07:42:01 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:08.025 07:42:01 nvme -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:29:08.025 07:42:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:08.025 ************************************ 00:29:08.025 START TEST nvme_perf 00:29:08.025 ************************************ 00:29:08.025 07:42:01 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:29:08.025 07:42:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:29:08.590 EAL: TSC is not safe to use in SMP mode 00:29:08.590 EAL: TSC is not invariant 00:29:08.590 [2024-05-16 07:42:01.956154] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:09.525 Initializing NVMe Controllers 00:29:09.525 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:09.525 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:09.525 Initialization complete. Launching workers. 00:29:09.525 ======================================================== 00:29:09.525 Latency(us) 00:29:09.525 Device Information : IOPS MiB/s Average min max 00:29:09.525 PCIE (0000:00:10.0) NSID 1 from core 0: 77616.42 909.57 1649.62 168.69 5036.22 00:29:09.525 ======================================================== 00:29:09.525 Total : 77616.42 909.57 1649.62 168.69 5036.22 00:29:09.525 00:29:09.525 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:09.525 ================================================================================= 00:29:09.525 1.00000% : 1139.075us 00:29:09.525 10.00000% : 1295.112us 00:29:09.525 25.00000% : 1419.942us 00:29:09.525 50.00000% : 1575.980us 00:29:09.525 75.00000% : 1755.423us 00:29:09.525 90.00000% : 2044.093us 00:29:09.525 95.00000% : 2356.168us 00:29:09.525 98.00000% : 3073.941us 00:29:09.525 99.00000% : 3339.205us 00:29:09.525 99.50000% : 3776.111us 00:29:09.525 99.90000% : 4868.374us 00:29:09.525 99.99000% : 5024.412us 00:29:09.525 99.99900% : 5055.619us 00:29:09.525 99.99990% : 5055.619us 00:29:09.525 99.99999% : 5055.619us 00:29:09.525 00:29:09.525 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:09.525 ============================================================================== 00:29:09.525 Range in us Cumulative IO count 00:29:09.525 167.740 - 168.716: 0.0013% ( 1) 00:29:09.525 176.518 - 177.493: 0.0039% ( 2) 00:29:09.525 178.468 - 179.443: 0.0052% ( 1) 00:29:09.525 182.369 - 183.344: 0.0064% ( 1) 00:29:09.525 184.319 - 185.295: 0.0077% ( 1) 00:29:09.525 189.196 - 190.171: 0.0090% ( 1) 00:29:09.525 190.171 - 191.146: 0.0103% ( 1) 00:29:09.525 191.146 - 192.121: 0.0116% ( 1) 00:29:09.525 193.097 - 194.072: 0.0129% ( 1) 00:29:09.525 257.462 - 259.413: 0.0142% ( 1) 00:29:09.525 259.413 - 261.363: 0.0155% ( 1) 00:29:09.525 261.363 - 263.314: 0.0180% ( 2) 00:29:09.525 263.314 - 265.264: 0.0193% ( 1) 00:29:09.525 312.075 - 314.026: 0.0206% ( 1) 00:29:09.525 314.026 - 315.976: 0.0219% ( 1) 00:29:09.525 315.976 - 317.927: 0.0232% ( 1) 00:29:09.525 317.927 - 319.877: 0.0271% ( 3) 00:29:09.525 319.877 - 321.828: 0.0283% ( 1) 00:29:09.525 321.828 - 323.778: 0.0309% ( 2) 00:29:09.525 323.778 - 325.729: 0.0335% ( 2) 00:29:09.525 327.679 - 329.630: 0.0348% ( 1) 00:29:09.525 329.630 - 331.580: 0.0361% ( 1) 00:29:09.525 333.530 - 335.481: 0.0374% ( 1) 00:29:09.525 351.085 - 353.035: 0.0399% ( 2) 00:29:09.525 353.035 - 354.986: 0.0412% ( 1) 00:29:09.525 354.986 - 356.936: 0.0425% ( 1) 00:29:09.525 356.936 - 358.887: 0.0438% ( 1) 00:29:09.525 358.887 - 360.837: 0.0451% ( 1) 00:29:09.525 360.837 - 362.788: 0.0464% ( 
1) 00:29:09.525 458.361 - 460.311: 0.0477% ( 1) 00:29:09.525 460.311 - 462.261: 0.0490% ( 1) 00:29:09.525 462.261 - 464.212: 0.0502% ( 1) 00:29:09.525 464.212 - 466.162: 0.0528% ( 2) 00:29:09.525 466.162 - 468.113: 0.0541% ( 1) 00:29:09.525 468.113 - 470.063: 0.0554% ( 1) 00:29:09.525 470.063 - 472.014: 0.0580% ( 2) 00:29:09.525 472.014 - 473.964: 0.0593% ( 1) 00:29:09.525 473.964 - 475.915: 0.0605% ( 1) 00:29:09.525 475.915 - 477.865: 0.0631% ( 2) 00:29:09.525 477.865 - 479.816: 0.0644% ( 1) 00:29:09.525 479.816 - 481.766: 0.0657% ( 1) 00:29:09.525 481.766 - 483.717: 0.0683% ( 2) 00:29:09.525 483.717 - 485.667: 0.0696% ( 1) 00:29:09.525 485.667 - 487.618: 0.0708% ( 1) 00:29:09.525 487.618 - 489.568: 0.0734% ( 2) 00:29:09.525 489.568 - 491.519: 0.0747% ( 1) 00:29:09.525 491.519 - 493.469: 0.0760% ( 1) 00:29:09.525 493.469 - 495.419: 0.0786% ( 2) 00:29:09.525 495.419 - 497.370: 0.0799% ( 1) 00:29:09.525 497.370 - 499.320: 0.0824% ( 2) 00:29:09.525 499.320 - 503.221: 0.0850% ( 2) 00:29:09.525 503.221 - 507.122: 0.0876% ( 2) 00:29:09.525 507.122 - 511.023: 0.0902% ( 2) 00:29:09.525 511.023 - 514.924: 0.0927% ( 2) 00:29:09.525 514.924 - 518.825: 0.0940% ( 1) 00:29:09.525 518.825 - 522.726: 0.0979% ( 3) 00:29:09.525 522.726 - 526.627: 0.1031% ( 4) 00:29:09.525 526.627 - 530.528: 0.1082% ( 4) 00:29:09.525 530.528 - 534.429: 0.1146% ( 5) 00:29:09.525 534.429 - 538.330: 0.1224% ( 6) 00:29:09.525 538.330 - 542.231: 0.1275% ( 4) 00:29:09.525 542.231 - 546.132: 0.1340% ( 5) 00:29:09.525 546.132 - 550.033: 0.1404% ( 5) 00:29:09.525 550.033 - 553.934: 0.1481% ( 6) 00:29:09.525 553.934 - 557.835: 0.1559% ( 6) 00:29:09.525 557.835 - 561.735: 0.1623% ( 5) 00:29:09.525 561.735 - 565.636: 0.1675% ( 4) 00:29:09.525 565.636 - 569.537: 0.1687% ( 1) 00:29:09.525 628.051 - 631.952: 0.1713% ( 2) 00:29:09.525 631.952 - 635.853: 0.1739% ( 2) 00:29:09.525 635.853 - 639.754: 0.1778% ( 3) 00:29:09.525 686.566 - 690.467: 0.1791% ( 1) 00:29:09.525 690.467 - 694.367: 0.1803% ( 1) 00:29:09.525 698.268 - 702.169: 0.1816% ( 1) 00:29:09.525 721.674 - 725.575: 0.1855% ( 3) 00:29:09.525 725.575 - 729.476: 0.1868% ( 1) 00:29:09.525 729.476 - 733.377: 0.1894% ( 2) 00:29:09.525 733.377 - 737.278: 0.1919% ( 2) 00:29:09.525 737.278 - 741.179: 0.1932% ( 1) 00:29:09.525 741.179 - 745.080: 0.1971% ( 3) 00:29:09.525 745.080 - 748.981: 0.2010% ( 3) 00:29:09.525 748.981 - 752.882: 0.2035% ( 2) 00:29:09.525 752.882 - 756.783: 0.2087% ( 4) 00:29:09.525 756.783 - 760.683: 0.2151% ( 5) 00:29:09.525 760.683 - 764.584: 0.2216% ( 5) 00:29:09.525 764.584 - 768.485: 0.2293% ( 6) 00:29:09.525 768.485 - 772.386: 0.2370% ( 6) 00:29:09.525 772.386 - 776.287: 0.2460% ( 7) 00:29:09.525 776.287 - 780.188: 0.2563% ( 8) 00:29:09.525 780.188 - 784.089: 0.2654% ( 7) 00:29:09.525 784.089 - 787.990: 0.2757% ( 8) 00:29:09.525 787.990 - 791.891: 0.2885% ( 10) 00:29:09.525 791.891 - 795.792: 0.3001% ( 9) 00:29:09.525 795.792 - 799.693: 0.3130% ( 10) 00:29:09.525 799.693 - 803.594: 0.3259% ( 10) 00:29:09.525 803.594 - 807.495: 0.3375% ( 9) 00:29:09.525 807.495 - 811.396: 0.3504% ( 10) 00:29:09.525 811.396 - 815.297: 0.3607% ( 8) 00:29:09.525 815.297 - 819.198: 0.3697% ( 7) 00:29:09.525 819.198 - 823.099: 0.3787% ( 7) 00:29:09.525 823.099 - 826.999: 0.3877% ( 7) 00:29:09.525 826.999 - 830.900: 0.3942% ( 5) 00:29:09.526 830.900 - 834.801: 0.4006% ( 5) 00:29:09.526 834.801 - 838.702: 0.4045% ( 3) 00:29:09.526 838.702 - 842.603: 0.4096% ( 4) 00:29:09.526 842.603 - 846.504: 0.4135% ( 3) 00:29:09.526 846.504 - 850.405: 0.4199% ( 5) 00:29:09.526 850.405 - 854.306: 
0.4238% ( 3) 00:29:09.526 854.306 - 858.207: 0.4290% ( 4) 00:29:09.526 858.207 - 862.108: 0.4315% ( 2) 00:29:09.526 862.108 - 866.009: 0.4328% ( 1) 00:29:09.526 869.910 - 873.811: 0.4341% ( 1) 00:29:09.526 873.811 - 877.712: 0.4354% ( 1) 00:29:09.526 877.712 - 881.613: 0.4367% ( 1) 00:29:09.526 881.613 - 885.514: 0.4380% ( 1) 00:29:09.526 889.415 - 893.315: 0.4393% ( 1) 00:29:09.526 893.315 - 897.216: 0.4406% ( 1) 00:29:09.526 897.216 - 901.117: 0.4418% ( 1) 00:29:09.526 905.018 - 908.919: 0.4431% ( 1) 00:29:09.526 908.919 - 912.820: 0.4444% ( 1) 00:29:09.526 912.820 - 916.721: 0.4457% ( 1) 00:29:09.526 916.721 - 920.622: 0.4470% ( 1) 00:29:09.526 924.523 - 928.424: 0.4483% ( 1) 00:29:09.526 928.424 - 932.325: 0.4496% ( 1) 00:29:09.526 932.325 - 936.226: 0.4509% ( 1) 00:29:09.526 936.226 - 940.127: 0.4521% ( 1) 00:29:09.526 944.028 - 947.929: 0.4534% ( 1) 00:29:09.526 947.929 - 951.830: 0.4547% ( 1) 00:29:09.526 951.830 - 955.731: 0.4560% ( 1) 00:29:09.526 955.731 - 959.631: 0.4573% ( 1) 00:29:09.526 963.532 - 967.433: 0.4586% ( 1) 00:29:09.526 967.433 - 971.334: 0.4599% ( 1) 00:29:09.526 971.334 - 975.235: 0.4612% ( 1) 00:29:09.526 979.136 - 983.037: 0.4625% ( 1) 00:29:09.526 983.037 - 986.938: 0.4637% ( 1) 00:29:09.526 986.938 - 990.839: 0.4650% ( 1) 00:29:09.526 990.839 - 994.740: 0.4663% ( 1) 00:29:09.526 998.641 - 1006.443: 0.4689% ( 2) 00:29:09.526 1006.443 - 1014.245: 0.4715% ( 2) 00:29:09.526 1014.245 - 1022.047: 0.4753% ( 3) 00:29:09.526 1022.047 - 1029.848: 0.4856% ( 8) 00:29:09.526 1029.848 - 1037.650: 0.4959% ( 8) 00:29:09.526 1037.650 - 1045.452: 0.5088% ( 10) 00:29:09.526 1045.452 - 1053.254: 0.5204% ( 9) 00:29:09.526 1053.254 - 1061.056: 0.5346% ( 11) 00:29:09.526 1061.056 - 1068.858: 0.5565% ( 17) 00:29:09.526 1068.858 - 1076.660: 0.5848% ( 22) 00:29:09.526 1076.660 - 1084.462: 0.6157% ( 24) 00:29:09.526 1084.462 - 1092.263: 0.6570% ( 32) 00:29:09.526 1092.263 - 1100.065: 0.7072% ( 39) 00:29:09.526 1100.065 - 1107.867: 0.7574% ( 39) 00:29:09.526 1107.867 - 1115.669: 0.8231% ( 51) 00:29:09.526 1115.669 - 1123.471: 0.8850% ( 48) 00:29:09.526 1123.471 - 1131.273: 0.9571% ( 56) 00:29:09.526 1131.273 - 1139.075: 1.0563% ( 77) 00:29:09.526 1139.075 - 1146.877: 1.1645% ( 84) 00:29:09.526 1146.877 - 1154.679: 1.2998% ( 105) 00:29:09.526 1154.679 - 1162.480: 1.4634% ( 127) 00:29:09.526 1162.480 - 1170.282: 1.6604% ( 153) 00:29:09.526 1170.282 - 1178.084: 1.8949% ( 182) 00:29:09.526 1178.084 - 1185.886: 2.1770% ( 219) 00:29:09.526 1185.886 - 1193.688: 2.4926% ( 245) 00:29:09.526 1193.688 - 1201.490: 2.8649% ( 289) 00:29:09.526 1201.490 - 1209.292: 3.2900% ( 330) 00:29:09.526 1209.292 - 1217.094: 3.7254% ( 338) 00:29:09.526 1217.094 - 1224.895: 4.2226% ( 386) 00:29:09.526 1224.895 - 1232.697: 4.7353% ( 398) 00:29:09.526 1232.697 - 1240.499: 5.2750% ( 419) 00:29:09.526 1240.499 - 1248.301: 5.8405% ( 439) 00:29:09.526 1248.301 - 1256.103: 6.4408% ( 466) 00:29:09.526 1256.103 - 1263.905: 7.0720% ( 490) 00:29:09.526 1263.905 - 1271.707: 7.7573% ( 532) 00:29:09.526 1271.707 - 1279.509: 8.4903% ( 569) 00:29:09.526 1279.509 - 1287.310: 9.2438% ( 585) 00:29:09.526 1287.310 - 1295.112: 10.0605% ( 634) 00:29:09.526 1295.112 - 1302.914: 10.8605% ( 621) 00:29:09.526 1302.914 - 1310.716: 11.7145% ( 663) 00:29:09.526 1310.716 - 1318.518: 12.5647% ( 660) 00:29:09.526 1318.518 - 1326.320: 13.4226% ( 666) 00:29:09.526 1326.320 - 1334.122: 14.2986% ( 680) 00:29:09.526 1334.122 - 1341.924: 15.1784% ( 683) 00:29:09.526 1341.924 - 1349.726: 16.0930% ( 710) 00:29:09.526 1349.726 - 1357.527: 17.0514% ( 744) 
00:29:09.526 1357.527 - 1365.329: 18.0368% ( 765) 00:29:09.526 1365.329 - 1373.131: 19.0622% ( 796) 00:29:09.526 1373.131 - 1380.933: 20.1056% ( 810) 00:29:09.526 1380.933 - 1388.735: 21.1941% ( 845) 00:29:09.526 1388.735 - 1396.537: 22.3251% ( 878) 00:29:09.526 1396.537 - 1404.339: 23.4948% ( 908) 00:29:09.526 1404.339 - 1412.141: 24.6825% ( 922) 00:29:09.526 1412.141 - 1419.942: 25.8908% ( 938) 00:29:09.526 1419.942 - 1427.744: 27.0862% ( 928) 00:29:09.526 1427.744 - 1435.546: 28.2855% ( 931) 00:29:09.526 1435.546 - 1443.348: 29.4719% ( 921) 00:29:09.526 1443.348 - 1451.150: 30.6595% ( 922) 00:29:09.526 1451.150 - 1458.952: 31.8421% ( 918) 00:29:09.526 1458.952 - 1466.754: 33.0233% ( 917) 00:29:09.526 1466.754 - 1474.556: 34.2162% ( 926) 00:29:09.526 1474.556 - 1482.358: 35.3987% ( 918) 00:29:09.526 1482.358 - 1490.159: 36.5761% ( 914) 00:29:09.526 1490.159 - 1497.961: 37.7766% ( 932) 00:29:09.526 1497.961 - 1505.763: 39.0146% ( 961) 00:29:09.526 1505.763 - 1513.565: 40.1997% ( 920) 00:29:09.526 1513.565 - 1521.367: 41.4479% ( 969) 00:29:09.526 1521.367 - 1529.169: 42.7000% ( 972) 00:29:09.526 1529.169 - 1536.971: 43.9624% ( 980) 00:29:09.526 1536.971 - 1544.773: 45.2570% ( 1005) 00:29:09.526 1544.773 - 1552.574: 46.5142% ( 976) 00:29:09.526 1552.574 - 1560.376: 47.7483% ( 958) 00:29:09.526 1560.376 - 1568.178: 48.9759% ( 953) 00:29:09.526 1568.178 - 1575.980: 50.2035% ( 953) 00:29:09.526 1575.980 - 1583.782: 51.4698% ( 983) 00:29:09.526 1583.782 - 1591.584: 52.7348% ( 982) 00:29:09.526 1591.584 - 1599.386: 54.0255% ( 1002) 00:29:09.526 1599.386 - 1607.188: 55.2931% ( 984) 00:29:09.526 1607.188 - 1614.990: 56.5129% ( 947) 00:29:09.526 1614.990 - 1622.791: 57.7715% ( 977) 00:29:09.526 1622.791 - 1630.593: 59.0158% ( 966) 00:29:09.526 1630.593 - 1638.395: 60.2705% ( 974) 00:29:09.526 1638.395 - 1646.197: 61.4337% ( 903) 00:29:09.526 1646.197 - 1653.999: 62.5776% ( 888) 00:29:09.526 1653.999 - 1661.801: 63.7241% ( 890) 00:29:09.526 1661.801 - 1669.603: 64.8667% ( 887) 00:29:09.526 1669.603 - 1677.405: 65.9436% ( 836) 00:29:09.526 1677.405 - 1685.206: 67.0076% ( 826) 00:29:09.526 1685.206 - 1693.008: 68.0600% ( 817) 00:29:09.526 1693.008 - 1700.810: 69.1022% ( 809) 00:29:09.526 1700.810 - 1708.612: 70.0876% ( 765) 00:29:09.526 1708.612 - 1716.414: 71.0357% ( 736) 00:29:09.526 1716.414 - 1724.216: 71.9593% ( 717) 00:29:09.526 1724.216 - 1732.018: 72.8133% ( 663) 00:29:09.526 1732.018 - 1739.820: 73.6262% ( 631) 00:29:09.526 1739.820 - 1747.622: 74.4442% ( 635) 00:29:09.526 1747.622 - 1755.423: 75.2377% ( 616) 00:29:09.526 1755.423 - 1763.225: 75.9977% ( 590) 00:29:09.526 1763.225 - 1771.027: 76.7397% ( 576) 00:29:09.526 1771.027 - 1778.829: 77.4185% ( 527) 00:29:09.526 1778.829 - 1786.631: 78.0587% ( 497) 00:29:09.526 1786.631 - 1794.433: 78.6706% ( 475) 00:29:09.526 1794.433 - 1802.235: 79.2786% ( 472) 00:29:09.526 1802.235 - 1810.037: 79.8750% ( 463) 00:29:09.526 1810.037 - 1817.838: 80.4560% ( 451) 00:29:09.526 1817.838 - 1825.640: 81.0344% ( 449) 00:29:09.526 1825.640 - 1833.442: 81.5960% ( 436) 00:29:09.526 1833.442 - 1841.244: 82.1164% ( 404) 00:29:09.526 1841.244 - 1849.046: 82.6433% ( 409) 00:29:09.526 1849.046 - 1856.848: 83.1186% ( 369) 00:29:09.526 1856.848 - 1864.650: 83.5759% ( 355) 00:29:09.526 1864.650 - 1872.452: 84.0049% ( 333) 00:29:09.526 1872.452 - 1880.254: 84.4545% ( 349) 00:29:09.526 1880.254 - 1888.055: 84.8899% ( 338) 00:29:09.526 1888.055 - 1895.857: 85.3008% ( 319) 00:29:09.526 1895.857 - 1903.659: 85.7169% ( 323) 00:29:09.526 1903.659 - 1911.461: 86.1059% ( 302) 
00:29:09.526 1911.461 - 1919.263: 86.4640% ( 278) 00:29:09.526 1919.263 - 1927.065: 86.8015% ( 262) 00:29:09.526 1927.065 - 1934.867: 87.1235% ( 250) 00:29:09.526 1934.867 - 1942.669: 87.4288% ( 237) 00:29:09.526 1942.669 - 1950.470: 87.7354% ( 238) 00:29:09.526 1950.470 - 1958.272: 88.0124% ( 215) 00:29:09.526 1958.272 - 1966.074: 88.2623% ( 194) 00:29:09.526 1966.074 - 1973.876: 88.5019% ( 186) 00:29:09.526 1973.876 - 1981.678: 88.7415% ( 186) 00:29:09.526 1981.678 - 1989.480: 88.9798% ( 185) 00:29:09.526 1989.480 - 1997.282: 89.1769% ( 153) 00:29:09.526 1997.282 - 2012.886: 89.5723% ( 307) 00:29:09.526 2012.886 - 2028.489: 89.9085% ( 261) 00:29:09.526 2028.489 - 2044.093: 90.2383% ( 256) 00:29:09.526 2044.093 - 2059.697: 90.5565% ( 247) 00:29:09.526 2059.697 - 2075.301: 90.8425% ( 222) 00:29:09.526 2075.301 - 2090.904: 91.0936% ( 195) 00:29:09.526 2090.904 - 2106.508: 91.3577% ( 205) 00:29:09.526 2106.508 - 2122.112: 91.5741% ( 168) 00:29:09.526 2122.112 - 2137.716: 91.7905% ( 168) 00:29:09.526 2137.716 - 2153.319: 92.0224% ( 180) 00:29:09.526 2153.319 - 2168.923: 92.2749% ( 196) 00:29:09.526 2168.923 - 2184.527: 92.5415% ( 207) 00:29:09.526 2184.527 - 2200.131: 92.8043% ( 204) 00:29:09.526 2200.131 - 2215.734: 93.0542% ( 194) 00:29:09.526 2215.734 - 2231.338: 93.2964% ( 188) 00:29:09.526 2231.338 - 2246.942: 93.5231% ( 176) 00:29:09.526 2246.942 - 2262.546: 93.7679% ( 190) 00:29:09.526 2262.546 - 2278.149: 93.9920% ( 174) 00:29:09.526 2278.149 - 2293.753: 94.2123% ( 171) 00:29:09.526 2293.753 - 2309.357: 94.4403% ( 177) 00:29:09.526 2309.357 - 2324.961: 94.6425% ( 157) 00:29:09.526 2324.961 - 2340.565: 94.8306% ( 146) 00:29:09.526 2340.565 - 2356.168: 95.0084% ( 138) 00:29:09.526 2356.168 - 2371.772: 95.1630% ( 120) 00:29:09.526 2371.772 - 2387.376: 95.3047% ( 110) 00:29:09.526 2387.376 - 2402.980: 95.4425% ( 107) 00:29:09.526 2402.980 - 2418.583: 95.5790% ( 106) 00:29:09.526 2418.583 - 2434.187: 95.7027% ( 96) 00:29:09.527 2434.187 - 2449.791: 95.8276% ( 97) 00:29:09.527 2449.791 - 2465.395: 95.9577% ( 101) 00:29:09.527 2465.395 - 2480.998: 96.0724% ( 89) 00:29:09.527 2480.998 - 2496.602: 96.1870% ( 89) 00:29:09.527 2496.602 - 2512.206: 96.2927% ( 82) 00:29:09.527 2512.206 - 2527.810: 96.4022% ( 85) 00:29:09.527 2527.810 - 2543.413: 96.5091% ( 83) 00:29:09.527 2543.413 - 2559.017: 96.6199% ( 86) 00:29:09.527 2559.017 - 2574.621: 96.7023% ( 64) 00:29:09.527 2574.621 - 2590.225: 96.7744% ( 56) 00:29:09.527 2590.225 - 2605.829: 96.8414% ( 52) 00:29:09.527 2605.829 - 2621.432: 96.8968% ( 43) 00:29:09.527 2621.432 - 2637.036: 96.9496% ( 41) 00:29:09.527 2637.036 - 2652.640: 96.9883% ( 30) 00:29:09.527 2652.640 - 2668.244: 97.0218% ( 26) 00:29:09.527 2668.244 - 2683.847: 97.0604% ( 30) 00:29:09.527 2683.847 - 2699.451: 97.0875% ( 21) 00:29:09.527 2699.451 - 2715.055: 97.1132% ( 20) 00:29:09.527 2715.055 - 2730.659: 97.1338% ( 16) 00:29:09.527 2730.659 - 2746.262: 97.1480% ( 11) 00:29:09.527 2746.262 - 2761.866: 97.1686% ( 16) 00:29:09.527 2761.866 - 2777.470: 97.1970% ( 22) 00:29:09.527 2777.470 - 2793.074: 97.2305% ( 26) 00:29:09.527 2793.074 - 2808.677: 97.2627% ( 25) 00:29:09.527 2808.677 - 2824.281: 97.2936% ( 24) 00:29:09.527 2824.281 - 2839.885: 97.3219% ( 22) 00:29:09.527 2839.885 - 2855.489: 97.3593% ( 29) 00:29:09.527 2855.489 - 2871.093: 97.3928% ( 26) 00:29:09.527 2871.093 - 2886.696: 97.4288% ( 28) 00:29:09.527 2886.696 - 2902.300: 97.4662% ( 29) 00:29:09.527 2902.300 - 2917.904: 97.5100% ( 34) 00:29:09.527 2917.904 - 2933.508: 97.5641% ( 42) 00:29:09.527 2933.508 - 2949.111: 
97.6195% ( 43) 00:29:09.527 2949.111 - 2964.715: 97.6800% ( 47) 00:29:09.527 2964.715 - 2980.319: 97.7470% ( 52) 00:29:09.527 2980.319 - 2995.923: 97.8037% ( 44) 00:29:09.527 2995.923 - 3011.526: 97.8565% ( 41) 00:29:09.527 3011.526 - 3027.130: 97.9080% ( 40) 00:29:09.527 3027.130 - 3042.734: 97.9544% ( 36) 00:29:09.527 3042.734 - 3058.338: 97.9982% ( 34) 00:29:09.527 3058.338 - 3073.941: 98.0536% ( 43) 00:29:09.527 3073.941 - 3089.545: 98.1000% ( 36) 00:29:09.527 3089.545 - 3105.149: 98.1566% ( 44) 00:29:09.527 3105.149 - 3120.753: 98.2030% ( 36) 00:29:09.527 3120.753 - 3136.356: 98.2545% ( 40) 00:29:09.527 3136.356 - 3151.960: 98.3164% ( 48) 00:29:09.527 3151.960 - 3167.564: 98.3743% ( 45) 00:29:09.527 3167.564 - 3183.168: 98.4310% ( 44) 00:29:09.527 3183.168 - 3198.772: 98.4941% ( 49) 00:29:09.527 3198.772 - 3214.375: 98.5482% ( 42) 00:29:09.527 3214.375 - 3229.979: 98.6152% ( 52) 00:29:09.527 3229.979 - 3245.583: 98.7002% ( 66) 00:29:09.527 3245.583 - 3261.187: 98.7724% ( 56) 00:29:09.527 3261.187 - 3276.790: 98.8458% ( 57) 00:29:09.527 3276.790 - 3292.394: 98.8922% ( 36) 00:29:09.527 3292.394 - 3307.998: 98.9411% ( 38) 00:29:09.527 3307.998 - 3323.602: 98.9849% ( 34) 00:29:09.527 3323.602 - 3339.205: 99.0339% ( 38) 00:29:09.527 3339.205 - 3354.809: 99.0828% ( 38) 00:29:09.527 3354.809 - 3370.413: 99.1253% ( 33) 00:29:09.527 3370.413 - 3386.017: 99.1588% ( 26) 00:29:09.527 3386.017 - 3401.620: 99.1769% ( 14) 00:29:09.527 3401.620 - 3417.224: 99.1975% ( 16) 00:29:09.527 3417.224 - 3432.828: 99.2168% ( 15) 00:29:09.527 3432.828 - 3448.432: 99.2413% ( 19) 00:29:09.527 3448.432 - 3464.036: 99.2619% ( 16) 00:29:09.527 3464.036 - 3479.639: 99.2812% ( 15) 00:29:09.527 3479.639 - 3495.243: 99.3005% ( 15) 00:29:09.527 3495.243 - 3510.847: 99.3160% ( 12) 00:29:09.527 3510.847 - 3526.451: 99.3353% ( 15) 00:29:09.527 3526.451 - 3542.054: 99.3469% ( 9) 00:29:09.527 3542.054 - 3557.658: 99.3585% ( 9) 00:29:09.527 3557.658 - 3573.262: 99.3662% ( 6) 00:29:09.527 3573.262 - 3588.866: 99.3752% ( 7) 00:29:09.527 3588.866 - 3604.469: 99.3855% ( 8) 00:29:09.527 3604.469 - 3620.073: 99.3946% ( 7) 00:29:09.527 3620.073 - 3635.677: 99.4036% ( 7) 00:29:09.527 3635.677 - 3651.281: 99.4152% ( 9) 00:29:09.527 3651.281 - 3666.884: 99.4281% ( 10) 00:29:09.527 3666.884 - 3682.488: 99.4396% ( 9) 00:29:09.527 3682.488 - 3698.092: 99.4525% ( 10) 00:29:09.527 3698.092 - 3713.696: 99.4654% ( 10) 00:29:09.527 3713.696 - 3729.300: 99.4783% ( 10) 00:29:09.527 3729.300 - 3744.903: 99.4899% ( 9) 00:29:09.527 3744.903 - 3760.507: 99.4976% ( 6) 00:29:09.527 3760.507 - 3776.111: 99.5041% ( 5) 00:29:09.527 3776.111 - 3791.715: 99.5118% ( 6) 00:29:09.527 3791.715 - 3807.318: 99.5195% ( 6) 00:29:09.527 3807.318 - 3822.922: 99.5272% ( 6) 00:29:09.527 3822.922 - 3838.526: 99.5324% ( 4) 00:29:09.527 3838.526 - 3854.130: 99.5388% ( 5) 00:29:09.527 3854.130 - 3869.733: 99.5479% ( 7) 00:29:09.527 3869.733 - 3885.337: 99.5607% ( 10) 00:29:09.527 3885.337 - 3900.941: 99.5736% ( 10) 00:29:09.527 3900.941 - 3916.545: 99.6007% ( 21) 00:29:09.527 3916.545 - 3932.148: 99.6445% ( 34) 00:29:09.527 3932.148 - 3947.752: 99.6818% ( 29) 00:29:09.527 3947.752 - 3963.356: 99.7011% ( 15) 00:29:09.527 3963.356 - 3978.960: 99.7192% ( 14) 00:29:09.527 3978.960 - 3994.563: 99.7359% ( 13) 00:29:09.527 3994.563 - 4025.771: 99.7552% ( 15) 00:29:09.527 4025.771 - 4056.979: 99.7759% ( 16) 00:29:09.527 4056.979 - 4088.186: 99.7913% ( 12) 00:29:09.527 4088.186 - 4119.394: 99.7939% ( 2) 00:29:09.527 4181.809 - 4213.016: 99.7952% ( 1) 00:29:09.527 4337.846 - 
4369.054: 99.8016% ( 5) 00:29:09.527 4369.054 - 4400.261: 99.8287% ( 21) 00:29:09.527 4400.261 - 4431.469: 99.8351% ( 5) 00:29:09.527 4743.544 - 4774.752: 99.8390% ( 3) 00:29:09.527 4774.752 - 4805.959: 99.8609% ( 17) 00:29:09.527 4805.959 - 4837.167: 99.8828% ( 17) 00:29:09.527 4837.167 - 4868.374: 99.9047% ( 17) 00:29:09.527 4868.374 - 4899.582: 99.9253% ( 16) 00:29:09.527 4899.582 - 4930.789: 99.9459% ( 16) 00:29:09.527 4930.789 - 4961.997: 99.9665% ( 16) 00:29:09.527 4961.997 - 4993.204: 99.9858% ( 15) 00:29:09.527 4993.204 - 5024.412: 99.9961% ( 8) 00:29:09.527 5024.412 - 5055.619: 100.0000% ( 3) 00:29:09.527 00:29:09.527 07:42:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:29:10.094 EAL: TSC is not safe to use in SMP mode 00:29:10.094 EAL: TSC is not invariant 00:29:10.094 [2024-05-16 07:42:03.528842] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:11.028 Initializing NVMe Controllers 00:29:11.028 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:11.028 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:11.028 Initialization complete. Launching workers. 00:29:11.028 ======================================================== 00:29:11.028 Latency(us) 00:29:11.028 Device Information : IOPS MiB/s Average min max 00:29:11.028 PCIE (0000:00:10.0) NSID 1 from core 0: 65471.39 767.24 1955.39 335.90 7327.18 00:29:11.028 ======================================================== 00:29:11.028 Total : 65471.39 767.24 1955.39 335.90 7327.18 00:29:11.028 00:29:11.028 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:11.028 ================================================================================= 00:29:11.028 1.00000% : 1248.301us 00:29:11.028 10.00000% : 1646.197us 00:29:11.028 25.00000% : 1778.829us 00:29:11.028 50.00000% : 1919.263us 00:29:11.028 75.00000% : 2059.697us 00:29:11.028 90.00000% : 2246.942us 00:29:11.028 95.00000% : 2449.791us 00:29:11.028 98.00000% : 2933.508us 00:29:11.028 99.00000% : 3417.224us 00:29:11.028 99.50000% : 4337.846us 00:29:11.028 99.90000% : 7240.146us 00:29:11.028 99.99000% : 7333.769us 00:29:11.028 99.99900% : 7333.769us 00:29:11.028 99.99990% : 7333.769us 00:29:11.028 99.99999% : 7333.769us 00:29:11.028 00:29:11.028 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:29:11.028 ============================================================================== 00:29:11.028 Range in us Cumulative IO count 00:29:11.028 335.481 - 337.431: 0.0061% ( 4) 00:29:11.028 337.431 - 339.382: 0.0092% ( 2) 00:29:11.028 339.382 - 341.332: 0.0107% ( 1) 00:29:11.028 341.332 - 343.283: 0.0122% ( 1) 00:29:11.028 343.283 - 345.233: 0.0137% ( 1) 00:29:11.028 347.184 - 349.134: 0.0153% ( 1) 00:29:11.028 397.896 - 399.846: 0.0168% ( 1) 00:29:11.028 403.747 - 405.698: 0.0214% ( 3) 00:29:11.028 885.514 - 889.415: 0.0229% ( 1) 00:29:11.028 889.415 - 893.315: 0.0244% ( 1) 00:29:11.028 893.315 - 897.216: 0.0290% ( 3) 00:29:11.028 897.216 - 901.117: 0.0336% ( 3) 00:29:11.028 901.117 - 905.018: 0.0367% ( 2) 00:29:11.028 905.018 - 908.919: 0.0428% ( 4) 00:29:11.028 912.820 - 916.721: 0.0443% ( 1) 00:29:11.028 916.721 - 920.622: 0.0458% ( 1) 00:29:11.028 920.622 - 924.523: 0.0473% ( 1) 00:29:11.028 928.424 - 932.325: 0.0489% ( 1) 00:29:11.028 932.325 - 936.226: 0.0550% ( 4) 00:29:11.028 936.226 - 940.127: 0.0580% ( 2) 00:29:11.028 940.127 - 944.028: 0.0641% ( 4) 00:29:11.028 944.028 - 947.929: 0.0718% ( 
5) 00:29:11.028 947.929 - 951.830: 0.0764% ( 3) 00:29:11.028 951.830 - 955.731: 0.0840% ( 5) 00:29:11.028 955.731 - 959.631: 0.0962% ( 8) 00:29:11.028 959.631 - 963.532: 0.1069% ( 7) 00:29:11.028 963.532 - 967.433: 0.1207% ( 9) 00:29:11.028 967.433 - 971.334: 0.1252% ( 3) 00:29:11.028 971.334 - 975.235: 0.1313% ( 4) 00:29:11.028 975.235 - 979.136: 0.1375% ( 4) 00:29:11.028 979.136 - 983.037: 0.1420% ( 3) 00:29:11.028 983.037 - 986.938: 0.1512% ( 6) 00:29:11.028 986.938 - 990.839: 0.1604% ( 6) 00:29:11.028 990.839 - 994.740: 0.1695% ( 6) 00:29:11.028 994.740 - 998.641: 0.1772% ( 5) 00:29:11.028 998.641 - 1006.443: 0.1985% ( 14) 00:29:11.028 1006.443 - 1014.245: 0.2138% ( 10) 00:29:11.028 1014.245 - 1022.047: 0.2214% ( 5) 00:29:11.028 1022.047 - 1029.848: 0.2489% ( 18) 00:29:11.028 1029.848 - 1037.650: 0.2642% ( 10) 00:29:11.028 1037.650 - 1045.452: 0.2795% ( 10) 00:29:11.028 1045.452 - 1053.254: 0.3207% ( 27) 00:29:11.028 1053.254 - 1061.056: 0.3528% ( 21) 00:29:11.028 1061.056 - 1068.858: 0.3726% ( 13) 00:29:11.028 1068.858 - 1076.660: 0.3971% ( 16) 00:29:11.028 1076.660 - 1084.462: 0.4230% ( 17) 00:29:11.028 1084.462 - 1092.263: 0.4628% ( 26) 00:29:11.028 1092.263 - 1100.065: 0.5040% ( 27) 00:29:11.028 1100.065 - 1107.867: 0.5391% ( 23) 00:29:11.028 1107.867 - 1115.669: 0.5635% ( 16) 00:29:11.028 1115.669 - 1123.471: 0.5849% ( 14) 00:29:11.028 1123.471 - 1131.273: 0.6078% ( 15) 00:29:11.028 1131.273 - 1139.075: 0.6369% ( 19) 00:29:11.028 1139.075 - 1146.877: 0.6705% ( 22) 00:29:11.028 1146.877 - 1154.679: 0.7071% ( 24) 00:29:11.028 1154.679 - 1162.480: 0.7438% ( 24) 00:29:11.028 1162.480 - 1170.282: 0.7667% ( 15) 00:29:11.028 1170.282 - 1178.084: 0.7850% ( 12) 00:29:11.028 1178.084 - 1185.886: 0.8018% ( 11) 00:29:11.028 1185.886 - 1193.688: 0.8262% ( 16) 00:29:11.028 1193.688 - 1201.490: 0.8522% ( 17) 00:29:11.028 1201.490 - 1209.292: 0.8812% ( 19) 00:29:11.028 1209.292 - 1217.094: 0.9072% ( 17) 00:29:11.028 1217.094 - 1224.895: 0.9316% ( 16) 00:29:11.028 1224.895 - 1232.697: 0.9515% ( 13) 00:29:11.028 1232.697 - 1240.499: 0.9790% ( 18) 00:29:11.028 1240.499 - 1248.301: 1.0034% ( 16) 00:29:11.028 1248.301 - 1256.103: 1.0538% ( 33) 00:29:11.028 1256.103 - 1263.905: 1.0706% ( 11) 00:29:11.028 1263.905 - 1271.707: 1.1057% ( 23) 00:29:11.028 1271.707 - 1279.509: 1.1363% ( 20) 00:29:11.028 1279.509 - 1287.310: 1.1622% ( 17) 00:29:11.028 1287.310 - 1295.112: 1.1943% ( 21) 00:29:11.028 1295.112 - 1302.914: 1.2355% ( 27) 00:29:11.028 1302.914 - 1310.716: 1.2798% ( 29) 00:29:11.028 1310.716 - 1318.518: 1.3272% ( 31) 00:29:11.028 1318.518 - 1326.320: 1.3974% ( 46) 00:29:11.028 1326.320 - 1334.122: 1.4967% ( 65) 00:29:11.028 1334.122 - 1341.924: 1.5624% ( 43) 00:29:11.028 1341.924 - 1349.726: 1.6219% ( 39) 00:29:11.028 1349.726 - 1357.527: 1.6891% ( 44) 00:29:11.028 1357.527 - 1365.329: 1.7624% ( 48) 00:29:11.028 1365.329 - 1373.131: 1.8434% ( 53) 00:29:11.028 1373.131 - 1380.933: 1.9228% ( 52) 00:29:11.028 1380.933 - 1388.735: 2.0694% ( 96) 00:29:11.028 1388.735 - 1396.537: 2.1839% ( 75) 00:29:11.028 1396.537 - 1404.339: 2.2908% ( 70) 00:29:11.028 1404.339 - 1412.141: 2.3703% ( 52) 00:29:11.028 1412.141 - 1419.942: 2.4741% ( 68) 00:29:11.028 1419.942 - 1427.744: 2.5780% ( 68) 00:29:11.028 1427.744 - 1435.546: 2.7108% ( 87) 00:29:11.028 1435.546 - 1443.348: 2.8116% ( 66) 00:29:11.028 1443.348 - 1451.150: 2.8865% ( 49) 00:29:11.028 1451.150 - 1458.952: 2.9552% ( 45) 00:29:11.028 1458.952 - 1466.754: 3.0300% ( 49) 00:29:11.028 1466.754 - 1474.556: 3.1385% ( 71) 00:29:11.028 1474.556 - 1482.358: 
3.2713% ( 87) 00:29:11.028 1482.358 - 1490.159: 3.4149% ( 94) 00:29:11.028 1490.159 - 1497.961: 3.5737% ( 104) 00:29:11.028 1497.961 - 1505.763: 3.7448% ( 112) 00:29:11.028 1505.763 - 1513.565: 3.8792% ( 88) 00:29:11.028 1513.565 - 1521.367: 4.0487% ( 111) 00:29:11.028 1521.367 - 1529.169: 4.2029% ( 101) 00:29:11.028 1529.169 - 1536.971: 4.4213% ( 143) 00:29:11.028 1536.971 - 1544.773: 4.6245% ( 133) 00:29:11.028 1544.773 - 1552.574: 4.8994% ( 180) 00:29:11.028 1552.574 - 1560.376: 5.1513% ( 165) 00:29:11.028 1560.376 - 1568.178: 5.5240% ( 244) 00:29:11.028 1568.178 - 1575.980: 5.9425% ( 274) 00:29:11.028 1575.980 - 1583.782: 6.2983% ( 233) 00:29:11.028 1583.782 - 1591.584: 6.7259% ( 280) 00:29:11.028 1591.584 - 1599.386: 7.1413% ( 272) 00:29:11.028 1599.386 - 1607.188: 7.6010% ( 301) 00:29:11.028 1607.188 - 1614.990: 8.1188% ( 339) 00:29:11.028 1614.990 - 1622.791: 8.6945% ( 377) 00:29:11.028 1622.791 - 1630.593: 9.3192% ( 409) 00:29:11.028 1630.593 - 1638.395: 9.9896% ( 439) 00:29:11.028 1638.395 - 1646.197: 10.7105% ( 472) 00:29:11.028 1646.197 - 1653.999: 11.4435% ( 480) 00:29:11.028 1653.999 - 1661.801: 12.1873% ( 487) 00:29:11.028 1661.801 - 1669.603: 12.8898% ( 460) 00:29:11.028 1669.603 - 1677.405: 13.7191% ( 543) 00:29:11.028 1677.405 - 1685.206: 14.4766% ( 496) 00:29:11.028 1685.206 - 1693.008: 15.2402% ( 500) 00:29:11.028 1693.008 - 1700.810: 16.0466% ( 528) 00:29:11.028 1700.810 - 1708.612: 16.9141% ( 568) 00:29:11.028 1708.612 - 1716.414: 17.7632% ( 556) 00:29:11.028 1716.414 - 1724.216: 18.6154% ( 558) 00:29:11.028 1724.216 - 1732.018: 19.4844% ( 569) 00:29:11.028 1732.018 - 1739.820: 20.3198% ( 547) 00:29:11.028 1739.820 - 1747.622: 21.1369% ( 535) 00:29:11.028 1747.622 - 1755.423: 22.0715% ( 612) 00:29:11.028 1755.423 - 1763.225: 23.0703% ( 654) 00:29:11.028 1763.225 - 1771.027: 24.1440% ( 703) 00:29:11.028 1771.027 - 1778.829: 25.2711% ( 738) 00:29:11.028 1778.829 - 1786.631: 26.4913% ( 799) 00:29:11.028 1786.631 - 1794.433: 27.6917% ( 786) 00:29:11.028 1794.433 - 1802.235: 28.7975% ( 724) 00:29:11.028 1802.235 - 1810.037: 29.9627% ( 763) 00:29:11.028 1810.037 - 1817.838: 31.1188% ( 757) 00:29:11.028 1817.838 - 1825.640: 32.3773% ( 824) 00:29:11.028 1825.640 - 1833.442: 33.8068% ( 936) 00:29:11.028 1833.442 - 1841.244: 35.0942% ( 843) 00:29:11.028 1841.244 - 1849.046: 36.3970% ( 853) 00:29:11.028 1849.046 - 1856.848: 37.8142% ( 928) 00:29:11.028 1856.848 - 1864.650: 39.4025% ( 1040) 00:29:11.028 1864.650 - 1872.452: 40.9359% ( 1004) 00:29:11.028 1872.452 - 1880.254: 42.5044% ( 1027) 00:29:11.028 1880.254 - 1888.055: 44.0041% ( 982) 00:29:11.028 1888.055 - 1895.857: 45.5924% ( 1040) 00:29:11.028 1895.857 - 1903.659: 47.2601% ( 1092) 00:29:11.028 1903.659 - 1911.461: 48.9783% ( 1125) 00:29:11.028 1911.461 - 1919.263: 50.6277% ( 1080) 00:29:11.028 1919.263 - 1927.065: 52.2007% ( 1030) 00:29:11.029 1927.065 - 1934.867: 53.7570% ( 1019) 00:29:11.029 1934.867 - 1942.669: 55.3285% ( 1029) 00:29:11.029 1942.669 - 1950.470: 56.9077% ( 1034) 00:29:11.029 1950.470 - 1958.272: 58.4761% ( 1027) 00:29:11.029 1958.272 - 1966.074: 60.0614% ( 1038) 00:29:11.029 1966.074 - 1973.876: 61.6772% ( 1058) 00:29:11.029 1973.876 - 1981.678: 63.2655% ( 1040) 00:29:11.029 1981.678 - 1989.480: 64.8355% ( 1028) 00:29:11.029 1989.480 - 1997.282: 66.4025% ( 1026) 00:29:11.029 1997.282 - 2012.886: 69.5501% ( 2061) 00:29:11.029 2012.886 - 2028.489: 72.2594% ( 1774) 00:29:11.029 2028.489 - 2044.093: 74.6434% ( 1561) 00:29:11.029 2044.093 - 2059.697: 76.8151% ( 1422) 00:29:11.029 2059.697 - 2075.301: 
78.6906% ( 1228) 00:29:11.029 2075.301 - 2090.904: 80.3323% ( 1075) 00:29:11.029 2090.904 - 2106.508: 81.8519% ( 995) 00:29:11.029 2106.508 - 2122.112: 83.2356% ( 906) 00:29:11.029 2122.112 - 2137.716: 84.4925% ( 823) 00:29:11.029 2137.716 - 2153.319: 85.5738% ( 708) 00:29:11.029 2153.319 - 2168.923: 86.5940% ( 668) 00:29:11.029 2168.923 - 2184.527: 87.5347% ( 616) 00:29:11.029 2184.527 - 2200.131: 88.3304% ( 521) 00:29:11.029 2200.131 - 2215.734: 89.0604% ( 478) 00:29:11.029 2215.734 - 2231.338: 89.6713% ( 400) 00:29:11.029 2231.338 - 2246.942: 90.2334% ( 368) 00:29:11.029 2246.942 - 2262.546: 90.6946% ( 302) 00:29:11.029 2262.546 - 2278.149: 91.1772% ( 316) 00:29:11.029 2278.149 - 2293.753: 91.6628% ( 318) 00:29:11.029 2293.753 - 2309.357: 92.0905% ( 280) 00:29:11.029 2309.357 - 2324.961: 92.5334% ( 290) 00:29:11.029 2324.961 - 2340.565: 92.9243% ( 256) 00:29:11.029 2340.565 - 2356.168: 93.3092% ( 252) 00:29:11.029 2356.168 - 2371.772: 93.6864% ( 247) 00:29:11.029 2371.772 - 2387.376: 94.0087% ( 211) 00:29:11.029 2387.376 - 2402.980: 94.2790% ( 177) 00:29:11.029 2402.980 - 2418.583: 94.5692% ( 190) 00:29:11.029 2418.583 - 2434.187: 94.8135% ( 160) 00:29:11.029 2434.187 - 2449.791: 95.0334% ( 144) 00:29:11.029 2449.791 - 2465.395: 95.2396% ( 135) 00:29:11.029 2465.395 - 2480.998: 95.4427% ( 133) 00:29:11.029 2480.998 - 2496.602: 95.5878% ( 95) 00:29:11.029 2496.602 - 2512.206: 95.7207% ( 87) 00:29:11.029 2512.206 - 2527.810: 95.8627% ( 93) 00:29:11.029 2527.810 - 2543.413: 96.0643% ( 132) 00:29:11.029 2543.413 - 2559.017: 96.2919% ( 149) 00:29:11.029 2559.017 - 2574.621: 96.4874% ( 128) 00:29:11.029 2574.621 - 2590.225: 96.6767% ( 124) 00:29:11.029 2590.225 - 2605.829: 96.8157% ( 91) 00:29:11.029 2605.829 - 2621.432: 96.9364% ( 79) 00:29:11.029 2621.432 - 2637.036: 97.0250% ( 58) 00:29:11.029 2637.036 - 2652.640: 97.1059% ( 53) 00:29:11.029 2652.640 - 2668.244: 97.1639% ( 38) 00:29:11.029 2668.244 - 2683.847: 97.2036% ( 26) 00:29:11.029 2683.847 - 2699.451: 97.2250% ( 14) 00:29:11.029 2699.451 - 2715.055: 97.2433% ( 12) 00:29:11.029 2715.055 - 2730.659: 97.2617% ( 12) 00:29:11.029 2730.659 - 2746.262: 97.2815% ( 13) 00:29:11.029 2746.262 - 2761.866: 97.2999% ( 12) 00:29:11.029 2761.866 - 2777.470: 97.3335% ( 22) 00:29:11.029 2777.470 - 2793.074: 97.3869% ( 35) 00:29:11.029 2793.074 - 2808.677: 97.4419% ( 36) 00:29:11.029 2808.677 - 2824.281: 97.5076% ( 43) 00:29:11.029 2824.281 - 2839.885: 97.5885% ( 53) 00:29:11.029 2839.885 - 2855.489: 97.6771% ( 58) 00:29:11.029 2855.489 - 2871.093: 97.7580% ( 53) 00:29:11.029 2871.093 - 2886.696: 97.8313% ( 48) 00:29:11.029 2886.696 - 2902.300: 97.8909% ( 39) 00:29:11.029 2902.300 - 2917.904: 97.9489% ( 38) 00:29:11.029 2917.904 - 2933.508: 98.0024% ( 35) 00:29:11.029 2933.508 - 2949.111: 98.0665% ( 42) 00:29:11.029 2949.111 - 2964.715: 98.1353% ( 45) 00:29:11.029 2964.715 - 2980.319: 98.1887% ( 35) 00:29:11.029 2980.319 - 2995.923: 98.2330% ( 29) 00:29:11.029 2995.923 - 3011.526: 98.2681% ( 23) 00:29:11.029 3011.526 - 3027.130: 98.3048% ( 24) 00:29:11.029 3027.130 - 3042.734: 98.3353% ( 20) 00:29:11.029 3042.734 - 3058.338: 98.3689% ( 22) 00:29:11.029 3058.338 - 3073.941: 98.4163% ( 31) 00:29:11.029 3073.941 - 3089.545: 98.4606% ( 29) 00:29:11.029 3089.545 - 3105.149: 98.4987% ( 25) 00:29:11.029 3105.149 - 3120.753: 98.5308% ( 21) 00:29:11.029 3120.753 - 3136.356: 98.5598% ( 19) 00:29:11.029 3136.356 - 3151.960: 98.5904% ( 20) 00:29:11.029 3151.960 - 3167.564: 98.6240% ( 22) 00:29:11.029 3167.564 - 3183.168: 98.6576% ( 22) 00:29:11.029 3183.168 - 
3198.772: 98.6820% ( 16) 00:29:11.029 3198.772 - 3214.375: 98.7080% ( 17) 00:29:11.029 3214.375 - 3229.979: 98.7370% ( 19) 00:29:11.029 3229.979 - 3245.583: 98.7660% ( 19) 00:29:11.029 3245.583 - 3261.187: 98.7920% ( 17) 00:29:11.029 3261.187 - 3276.790: 98.8179% ( 17) 00:29:11.029 3276.790 - 3292.394: 98.8393% ( 14) 00:29:11.029 3292.394 - 3307.998: 98.8683% ( 19) 00:29:11.029 3307.998 - 3323.602: 98.8912% ( 15) 00:29:11.029 3323.602 - 3339.205: 98.9080% ( 11) 00:29:11.029 3339.205 - 3354.809: 98.9233% ( 10) 00:29:11.029 3354.809 - 3370.413: 98.9355% ( 8) 00:29:11.029 3370.413 - 3386.017: 98.9600% ( 16) 00:29:11.029 3386.017 - 3401.620: 98.9813% ( 14) 00:29:11.029 3401.620 - 3417.224: 99.0012% ( 13) 00:29:11.029 3417.224 - 3432.828: 99.0119% ( 7) 00:29:11.029 3432.828 - 3448.432: 99.0180% ( 4) 00:29:11.029 3448.432 - 3464.036: 99.0195% ( 1) 00:29:11.029 3495.243 - 3510.847: 99.0287% ( 6) 00:29:11.029 3510.847 - 3526.451: 99.0562% ( 18) 00:29:11.029 3526.451 - 3542.054: 99.0852% ( 19) 00:29:11.029 3542.054 - 3557.658: 99.1112% ( 17) 00:29:11.029 3557.658 - 3573.262: 99.1295% ( 12) 00:29:11.029 3573.262 - 3588.866: 99.1463% ( 11) 00:29:11.029 3588.866 - 3604.469: 99.1616% ( 10) 00:29:11.029 3604.469 - 3620.073: 99.1738% ( 8) 00:29:11.029 3620.073 - 3635.677: 99.1845% ( 7) 00:29:11.029 3635.677 - 3651.281: 99.1982% ( 9) 00:29:11.029 3651.281 - 3666.884: 99.2104% ( 8) 00:29:11.029 3666.884 - 3682.488: 99.2181% ( 5) 00:29:11.029 4119.394 - 4150.601: 99.2333% ( 10) 00:29:11.029 4150.601 - 4181.809: 99.2532% ( 13) 00:29:11.029 4181.809 - 4213.016: 99.2715% ( 12) 00:29:11.029 4213.016 - 4244.224: 99.3082% ( 24) 00:29:11.029 4244.224 - 4275.431: 99.3723% ( 42) 00:29:11.029 4275.431 - 4306.639: 99.4655% ( 61) 00:29:11.029 4306.639 - 4337.846: 99.5586% ( 61) 00:29:11.029 4337.846 - 4369.054: 99.5876% ( 19) 00:29:11.029 4369.054 - 4400.261: 99.6289% ( 27) 00:29:11.029 4400.261 - 4431.469: 99.6762% ( 31) 00:29:11.029 4431.469 - 4462.676: 99.7236% ( 31) 00:29:11.029 4462.676 - 4493.884: 99.7526% ( 19) 00:29:11.029 4493.884 - 4525.091: 99.7755% ( 15) 00:29:11.029 4525.091 - 4556.299: 99.7938% ( 12) 00:29:11.029 4556.299 - 4587.506: 99.8045% ( 7) 00:29:11.029 7052.901 - 7084.109: 99.8106% ( 4) 00:29:11.029 7084.109 - 7115.316: 99.8320% ( 14) 00:29:11.029 7115.316 - 7146.524: 99.8503% ( 12) 00:29:11.029 7146.524 - 7177.731: 99.8732% ( 15) 00:29:11.029 7177.731 - 7208.939: 99.8977% ( 16) 00:29:11.029 7208.939 - 7240.146: 99.9221% ( 16) 00:29:11.029 7240.146 - 7271.354: 99.9603% ( 25) 00:29:11.029 7271.354 - 7302.561: 99.9878% ( 18) 00:29:11.029 7302.561 - 7333.769: 100.0000% ( 8) 00:29:11.029 00:29:11.595 07:42:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:29:11.595 00:29:11.595 real 0m3.576s 00:29:11.595 user 0m2.500s 00:29:11.595 sys 0m1.075s 00:29:11.595 ************************************ 00:29:11.595 END TEST nvme_perf 00:29:11.595 ************************************ 00:29:11.595 07:42:05 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:11.595 07:42:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:29:11.595 07:42:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:11.595 07:42:05 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:29:11.595 07:42:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:11.595 07:42:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:11.595 ************************************ 00:29:11.595 START TEST 
nvme_hello_world 00:29:11.595 ************************************ 00:29:11.595 07:42:05 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:12.163 EAL: TSC is not safe to use in SMP mode 00:29:12.163 EAL: TSC is not invariant 00:29:12.163 [2024-05-16 07:42:05.632563] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:12.163 Initializing NVMe Controllers 00:29:12.163 Attaching to 0000:00:10.0 00:29:12.163 Attached to 0000:00:10.0 00:29:12.163 Namespace ID: 1 size: 5GB 00:29:12.163 Initialization complete. 00:29:12.163 INFO: using host memory buffer for IO 00:29:12.163 Hello world! 00:29:12.163 00:29:12.163 real 0m0.578s 00:29:12.163 user 0m0.009s 00:29:12.163 sys 0m0.568s 00:29:12.163 07:42:05 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:12.163 07:42:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:12.163 ************************************ 00:29:12.163 END TEST nvme_hello_world 00:29:12.163 ************************************ 00:29:12.422 07:42:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:12.422 07:42:05 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:12.422 07:42:05 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:12.422 07:42:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:12.422 ************************************ 00:29:12.422 START TEST nvme_sgl 00:29:12.422 ************************************ 00:29:12.422 07:42:05 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:12.680 EAL: TSC is not safe to use in SMP mode 00:29:12.680 EAL: TSC is not invariant 00:29:12.680 [2024-05-16 07:42:06.222726] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:12.937 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:29:12.937 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:29:12.937 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:29:12.937 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:29:12.937 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:29:12.937 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:29:12.937 NVMe Readv/Writev Request test 00:29:12.937 Attaching to 0000:00:10.0 00:29:12.937 Attached to 0000:00:10.0 00:29:12.937 0000:00:10.0: build_io_request_2 test passed 00:29:12.937 0000:00:10.0: build_io_request_4 test passed 00:29:12.937 0000:00:10.0: build_io_request_5 test passed 00:29:12.937 0000:00:10.0: build_io_request_6 test passed 00:29:12.937 0000:00:10.0: build_io_request_7 test passed 00:29:12.937 0000:00:10.0: build_io_request_10 test passed 00:29:12.937 Cleaning up... 
00:29:12.937 00:29:12.937 real 0m0.560s 00:29:12.937 user 0m0.014s 00:29:12.937 sys 0m0.547s 00:29:12.937 07:42:06 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:12.937 07:42:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:29:12.937 ************************************ 00:29:12.937 END TEST nvme_sgl 00:29:12.937 ************************************ 00:29:12.938 07:42:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:12.938 07:42:06 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:12.938 07:42:06 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:12.938 07:42:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:12.938 ************************************ 00:29:12.938 START TEST nvme_e2edp 00:29:12.938 ************************************ 00:29:12.938 07:42:06 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:13.504 EAL: TSC is not safe to use in SMP mode 00:29:13.504 EAL: TSC is not invariant 00:29:13.504 [2024-05-16 07:42:06.859039] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:13.504 NVMe Write/Read with End-to-End data protection test 00:29:13.504 Attaching to 0000:00:10.0 00:29:13.504 Attached to 0000:00:10.0 00:29:13.504 Cleaning up... 00:29:13.504 00:29:13.504 real 0m0.581s 00:29:13.504 user 0m0.023s 00:29:13.504 sys 0m0.557s 00:29:13.504 07:42:06 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:13.504 07:42:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:29:13.504 ************************************ 00:29:13.504 END TEST nvme_e2edp 00:29:13.504 ************************************ 00:29:13.504 07:42:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:13.504 07:42:06 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:13.504 07:42:06 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:13.504 07:42:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:13.504 ************************************ 00:29:13.504 START TEST nvme_reserve 00:29:13.504 ************************************ 00:29:13.504 07:42:06 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:14.069 EAL: TSC is not safe to use in SMP mode 00:29:14.069 EAL: TSC is not invariant 00:29:14.069 [2024-05-16 07:42:07.461318] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:14.069 ===================================================== 00:29:14.069 NVMe Controller at PCI bus 0, device 16, function 0 00:29:14.069 ===================================================== 00:29:14.069 Reservations: Not Supported 00:29:14.069 Reservation test passed 00:29:14.069 00:29:14.069 real 0m0.565s 00:29:14.069 user 0m0.020s 00:29:14.069 sys 0m0.544s 00:29:14.069 07:42:07 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:14.069 ************************************ 00:29:14.069 END TEST nvme_reserve 00:29:14.069 07:42:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:29:14.069 ************************************ 00:29:14.069 07:42:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:14.069 07:42:07 nvme -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:14.069 07:42:07 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:14.069 07:42:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:14.069 ************************************ 00:29:14.069 START TEST nvme_err_injection 00:29:14.069 ************************************ 00:29:14.069 07:42:07 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:14.636 EAL: TSC is not safe to use in SMP mode 00:29:14.636 EAL: TSC is not invariant 00:29:14.636 [2024-05-16 07:42:08.108765] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:14.636 NVMe Error Injection test 00:29:14.636 Attaching to 0000:00:10.0 00:29:14.636 Attached to 0000:00:10.0 00:29:14.636 0000:00:10.0: get features failed as expected 00:29:14.636 0000:00:10.0: get features successfully as expected 00:29:14.636 0000:00:10.0: read failed as expected 00:29:14.636 0000:00:10.0: read successfully as expected 00:29:14.636 Cleaning up... 00:29:14.636 00:29:14.636 real 0m0.608s 00:29:14.636 user 0m0.008s 00:29:14.636 sys 0m0.600s 00:29:14.636 07:42:08 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:14.636 07:42:08 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:29:14.636 ************************************ 00:29:14.636 END TEST nvme_err_injection 00:29:14.636 ************************************ 00:29:14.895 07:42:08 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:14.895 07:42:08 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:29:14.895 07:42:08 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:14.895 07:42:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:14.895 ************************************ 00:29:14.895 START TEST nvme_overhead 00:29:14.895 ************************************ 00:29:14.895 07:42:08 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:15.459 EAL: TSC is not safe to use in SMP mode 00:29:15.459 EAL: TSC is not invariant 00:29:15.459 [2024-05-16 07:42:08.718424] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:16.401 Initializing NVMe Controllers 00:29:16.401 Attaching to 0000:00:10.0 00:29:16.401 Attached to 0000:00:10.0 00:29:16.401 Initialization complete. Launching workers. 
00:29:16.401 submit (in ns) avg, min, max = 12426.8, 9044.7, 106893.0 00:29:16.401 complete (in ns) avg, min, max = 9004.3, 6335.2, 79761.7 00:29:16.401 00:29:16.401 Submit histogram 00:29:16.401 ================ 00:29:16.401 Range in us Cumulative Count 00:29:16.401 9.021 - 9.082: 0.0146% ( 2) 00:29:16.401 9.082 - 9.143: 0.0510% ( 5) 00:29:16.401 9.143 - 9.204: 0.0655% ( 2) 00:29:16.401 9.204 - 9.265: 0.0874% ( 3) 00:29:16.401 9.265 - 9.326: 0.1019% ( 2) 00:29:16.401 9.326 - 9.387: 0.1311% ( 4) 00:29:16.401 9.387 - 9.448: 0.1529% ( 3) 00:29:16.401 9.448 - 9.509: 0.2985% ( 20) 00:29:16.401 9.509 - 9.569: 0.8009% ( 69) 00:29:16.401 9.569 - 9.630: 2.0095% ( 166) 00:29:16.401 9.630 - 9.691: 3.5311% ( 209) 00:29:16.401 9.691 - 9.752: 5.6134% ( 286) 00:29:16.401 9.752 - 9.813: 7.4190% ( 248) 00:29:16.401 9.813 - 9.874: 9.6105% ( 301) 00:29:16.401 9.874 - 9.935: 12.4936% ( 396) 00:29:16.401 9.935 - 9.996: 16.0466% ( 488) 00:29:16.401 9.996 - 10.057: 20.1820% ( 568) 00:29:16.401 10.057 - 10.118: 23.8005% ( 497) 00:29:16.401 10.118 - 10.179: 26.0939% ( 315) 00:29:16.401 10.179 - 10.240: 27.4044% ( 180) 00:29:16.401 10.240 - 10.301: 28.2417% ( 115) 00:29:16.401 10.301 - 10.362: 28.8315% ( 81) 00:29:16.401 10.362 - 10.423: 29.2246% ( 54) 00:29:16.401 10.423 - 10.484: 29.5886% ( 50) 00:29:16.401 10.484 - 10.545: 29.9818% ( 54) 00:29:16.401 10.545 - 10.606: 30.3750% ( 54) 00:29:16.401 10.606 - 10.667: 30.9210% ( 75) 00:29:16.401 10.667 - 10.728: 31.6127% ( 95) 00:29:16.401 10.728 - 10.789: 32.3990% ( 108) 00:29:16.401 10.789 - 10.849: 33.0761% ( 93) 00:29:16.401 10.849 - 10.910: 33.9206% ( 116) 00:29:16.401 10.910 - 10.971: 34.8890% ( 133) 00:29:16.401 10.971 - 11.032: 36.0684% ( 162) 00:29:16.401 11.032 - 11.093: 37.4809% ( 194) 00:29:16.401 11.093 - 11.154: 38.8569% ( 189) 00:29:16.401 11.154 - 11.215: 40.3786% ( 209) 00:29:16.401 11.215 - 11.276: 41.5872% ( 166) 00:29:16.401 11.276 - 11.337: 42.3444% ( 104) 00:29:16.401 11.337 - 11.398: 42.9778% ( 87) 00:29:16.401 11.398 - 11.459: 43.3564% ( 52) 00:29:16.401 11.459 - 11.520: 43.6767% ( 44) 00:29:16.401 11.520 - 11.581: 43.8806% ( 28) 00:29:16.401 11.581 - 11.642: 43.9898% ( 15) 00:29:16.401 11.642 - 11.703: 44.0699% ( 11) 00:29:16.401 11.703 - 11.764: 44.1500% ( 11) 00:29:16.401 11.764 - 11.825: 44.2155% ( 9) 00:29:16.401 11.825 - 11.886: 44.2738% ( 8) 00:29:16.401 11.886 - 11.947: 44.3466% ( 10) 00:29:16.401 11.947 - 12.008: 44.4339% ( 12) 00:29:16.401 12.008 - 12.069: 44.5359% ( 14) 00:29:16.401 12.069 - 12.129: 44.6232% ( 12) 00:29:16.401 12.129 - 12.190: 44.7106% ( 12) 00:29:16.401 12.190 - 12.251: 44.7761% ( 9) 00:29:16.401 12.251 - 12.312: 44.8416% ( 9) 00:29:16.401 12.312 - 12.373: 44.8853% ( 6) 00:29:16.401 12.373 - 12.434: 44.9654% ( 11) 00:29:16.401 12.434 - 12.495: 45.1911% ( 31) 00:29:16.401 12.495 - 12.556: 45.7226% ( 73) 00:29:16.401 12.556 - 12.617: 46.9967% ( 175) 00:29:16.401 12.617 - 12.678: 50.2075% ( 441) 00:29:16.401 12.678 - 12.739: 55.6316% ( 745) 00:29:16.401 12.739 - 12.800: 61.6818% ( 831) 00:29:16.401 12.800 - 12.861: 67.1933% ( 757) 00:29:16.401 12.861 - 12.922: 70.8409% ( 501) 00:29:16.401 12.922 - 12.983: 73.1999% ( 324) 00:29:16.401 12.983 - 13.044: 74.7579% ( 214) 00:29:16.401 13.044 - 13.105: 75.5297% ( 106) 00:29:16.401 13.105 - 13.166: 76.1485% ( 85) 00:29:16.401 13.166 - 13.227: 76.6509% ( 69) 00:29:16.401 13.227 - 13.288: 77.0732% ( 58) 00:29:16.401 13.288 - 13.349: 77.4445% ( 51) 00:29:16.401 13.349 - 13.409: 77.7066% ( 36) 00:29:16.401 13.409 - 13.470: 78.0488% ( 47) 00:29:16.401 13.470 - 13.531: 78.3618% ( 
43) [nvme_overhead latency summary continues: per-bucket cumulative percentages from 13.531 us through 107.276 us, ending at 100.0000% ( 1)]
00:29:16.402 Complete histogram
00:29:16.402 ==================
00:29:16.402 Range in us Cumulative Count
00:29:16.402 [per-bucket cumulative counts from 6.309 us through 42.423 us omitted; the final buckets and the 100.0000% tail follow below]
00:29:16.403 51.200 - 51.444: 99.9927% ( 1) 00:29:16.403 79.482 - 79.969: 100.0000% ( 1) 00:29:16.403 00:29:16.403 00:29:16.403 real 0m1.554s 00:29:16.403 user 0m1.025s 00:29:16.403 sys 0m0.528s 00:29:16.403 07:42:09 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:16.403 07:42:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:29:16.403 ************************************ 00:29:16.403 END TEST nvme_overhead 00:29:16.403 ************************************ 00:29:16.403 07:42:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:16.403 07:42:09 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:29:16.403 07:42:09 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:16.403 07:42:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:16.403 ************************************ 00:29:16.403 START TEST nvme_arbitration 00:29:16.403 ************************************ 00:29:16.404 07:42:09 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:16.996 EAL: TSC is not safe to use in SMP mode 00:29:16.996 EAL: TSC is not invariant 00:29:16.996 [2024-05-16 07:42:10.328884] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:21.176 Initializing NVMe Controllers 00:29:21.176 Attaching to 0000:00:10.0 00:29:21.176 Attached to 0000:00:10.0 00:29:21.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:29:21.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:29:21.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:29:21.176 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:29:21.176 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:29:21.176 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:29:21.176 Initialization complete. Launching workers. 
00:29:21.176 Starting thread on core 1 with urgent priority queue 00:29:21.176 Starting thread on core 2 with urgent priority queue 00:29:21.176 Starting thread on core 3 with urgent priority queue 00:29:21.176 Starting thread on core 0 with urgent priority queue 00:29:21.176 QEMU NVMe Ctrl (12340 ) core 0: 5466.00 IO/s 18.29 secs/100000 ios 00:29:21.176 QEMU NVMe Ctrl (12340 ) core 1: 5863.67 IO/s 17.05 secs/100000 ios 00:29:21.176 QEMU NVMe Ctrl (12340 ) core 2: 5742.00 IO/s 17.42 secs/100000 ios 00:29:21.176 QEMU NVMe Ctrl (12340 ) core 3: 5760.33 IO/s 17.36 secs/100000 ios 00:29:21.176 ======================================================== 00:29:21.176 00:29:21.176 00:29:21.176 real 0m4.148s 00:29:21.176 user 0m12.632s 00:29:21.176 sys 0m0.546s 00:29:21.176 07:42:13 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:21.176 07:42:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:29:21.176 ************************************ 00:29:21.176 END TEST nvme_arbitration 00:29:21.176 ************************************ 00:29:21.176 07:42:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:29:21.176 07:42:13 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:29:21.176 07:42:13 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:21.176 07:42:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:21.176 ************************************ 00:29:21.176 START TEST nvme_single_aen 00:29:21.176 ************************************ 00:29:21.176 07:42:13 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:29:21.176 EAL: TSC is not safe to use in SMP mode 00:29:21.176 EAL: TSC is not invariant 00:29:21.176 [2024-05-16 07:42:14.558075] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:21.176 Asynchronous Event Request test 00:29:21.176 Attaching to 0000:00:10.0 00:29:21.176 Attached to 0000:00:10.0 00:29:21.176 Reset controller to setup AER completions for this process 00:29:21.176 Registering asynchronous event callbacks... 00:29:21.176 Getting orig temperature thresholds of all controllers 00:29:21.176 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:21.176 Setting all controllers temperature threshold low to trigger AER 00:29:21.176 Waiting for all controllers temperature threshold to be set lower 00:29:21.176 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:21.176 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:29:21.176 Waiting for all controllers to trigger AER and reset threshold 00:29:21.176 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:21.176 Cleaning up... 
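Every test in this run is launched through the suite's run_test wrapper, which is what prints the START TEST / END TEST banners and the real/user/sys timings that punctuate the log above. As a hypothetical, simplified sketch (the actual helper lives in test/common/autotest_common.sh and also manages xtrace and error reporting; the function body below is illustrative only):

    # Minimal run_test-style wrapper: banner, timed execution, banner.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # source of the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # Invocation matching the arbitration run above:
    # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0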
00:29:21.176 00:29:21.176 real 0m0.619s 00:29:21.176 user 0m0.011s 00:29:21.176 sys 0m0.600s 00:29:21.176 07:42:14 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:21.176 07:42:14 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:29:21.176 ************************************ 00:29:21.176 END TEST nvme_single_aen 00:29:21.176 ************************************ 00:29:21.176 07:42:14 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:29:21.176 07:42:14 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:21.176 07:42:14 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:21.176 07:42:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:21.176 ************************************ 00:29:21.176 START TEST nvme_doorbell_aers 00:29:21.176 ************************************ 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:29:21.176 07:42:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:21.741 EAL: TSC is not safe to use in SMP mode 00:29:21.741 EAL: TSC is not invariant 00:29:21.741 [2024-05-16 07:42:15.240844] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:21.999 Executing: test_write_invalid_db 00:29:21.999 Waiting for AER completion... 00:29:21.999 Asynchronous Event received. 00:29:21.999 Error Informaton Log Page received. 00:29:21.999 Success: test_write_invalid_db 00:29:21.999 00:29:21.999 Executing: test_invalid_db_write_overflow_sq 00:29:21.999 Waiting for AER completion... 00:29:21.999 Asynchronous Event received. 00:29:21.999 Error Informaton Log Page received. 00:29:21.999 Success: test_invalid_db_write_overflow_sq 00:29:21.999 00:29:21.999 Executing: test_invalid_db_write_overflow_cq 00:29:21.999 Waiting for AER completion... 00:29:21.999 Asynchronous Event received. 00:29:21.999 Error Informaton Log Page received. 
00:29:21.999 Success: test_invalid_db_write_overflow_cq 00:29:21.999 00:29:21.999 00:29:21.999 real 0m0.636s 00:29:21.999 user 0m0.022s 00:29:21.999 sys 0m0.634s 00:29:21.999 07:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:21.999 ************************************ 00:29:21.999 07:42:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:29:21.999 END TEST nvme_doorbell_aers 00:29:21.999 ************************************ 00:29:21.999 07:42:15 nvme -- nvme/nvme.sh@97 -- # uname 00:29:21.999 07:42:15 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:29:21.999 07:42:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:21.999 07:42:15 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:21.999 07:42:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:21.999 07:42:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:21.999 ************************************ 00:29:21.999 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:21.999 ************************************ 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:21.999 * Looking for test storage... 00:29:21.999 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:21.999 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:22.000 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:29:22.257 07:42:15 
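Both nvme_doorbell_aers above and the reset test that follows enumerate controllers with the get_nvme_bdfs helper whose xtrace output is visible here: scripts/gen_nvme.sh emits the generated bdev JSON config and jq pulls out each controller's PCI address (0000:00:10.0 on this VM). A condensed sketch of that pattern, assuming the rootdir path used throughout this run (the empty-array handling in the real autotest_common.sh helper is more elaborate):

    # Enumerate NVMe PCI addresses (BDFs) from the generated bdev config.
    get_nvme_bdfs() {
        local rootdir=/usr/home/vagrant/spdk_repo/spdk
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no controllers found
        printf '%s\n' "${bdfs[@]}"
    }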
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=67422 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 67422 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 67422 ']' 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:22.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:22.257 07:42:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:22.257 [2024-05-16 07:42:15.580224] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:22.257 [2024-05-16 07:42:15.580426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:22.822 EAL: TSC is not safe to use in SMP mode 00:29:22.822 EAL: TSC is not invariant 00:29:22.822 [2024-05-16 07:42:16.094735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.822 [2024-05-16 07:42:16.181340] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:22.822 [2024-05-16 07:42:16.181404] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:22.822 [2024-05-16 07:42:16.181413] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:29:22.822 [2024-05-16 07:42:16.181420] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
00:29:22.822 [2024-05-16 07:42:16.185164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.822 [2024-05-16 07:42:16.185233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.822 [2024-05-16 07:42:16.185394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.822 [2024-05-16 07:42:16.185392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:23.387 [2024-05-16 07:42:16.764809] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:23.387 nvme0n1 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:23.387 true 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715845336 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=67434 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:23.387 07:42:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:25.963 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:25.964 [2024-05-16 07:42:18.965538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:29:25.964 [2024-05-16 
07:42:18.966805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:25.964 [2024-05-16 07:42:18.966840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:25.964 [2024-05-16 07:42:18.966851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.964 [2024-05-16 07:42:18.967670] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.964 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 67434 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 67434 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 67434 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.964 07:42:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.CT8MTB 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.XEli9m 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 67422 ']' 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps -c -o command 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # tail -1 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:29:25.964 killing process with pid 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67422' 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 67422 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:25.964 00:29:25.964 real 0m3.961s 00:29:25.964 user 0m13.125s 00:29:25.964 sys 0m0.859s 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:25.964 07:42:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:29:25.964 ************************************ 00:29:25.964 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:25.964 ************************************ 00:29:25.964 07:42:19 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:25.964 07:42:19 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:25.964 07:42:19 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:25.964 07:42:19 nvme -- 
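The bdev_nvme_reset_stuck_adm_cmd test drives everything over the RPC socket: it injects an error so the next GET FEATURES admin command (opcode 10) is held for up to 15 seconds, issues that command with bdev_nvme_send_cmd, resets the controller while the command is stuck, and finally decodes the base64-encoded completion. The sketch below strings together the rpc.py calls as they appear in the trace; the temp-file redirection, the literal /tmp/err_inj file name, and the sleep are reconstructed assumptions rather than the script's exact wording:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_file=/tmp/err_inj_XXXXX.txt    # the real test creates this with mktemp

    # Hold the next GET FEATURES (opc 10) admin command and complete it
    # with SCT=0x0 / SC=0x1 once the timeout or a reset releases it.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Issue the admin command in the background; its JSON result (including
    # the .cpl completion field) is captured for later decoding.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > "$tmp_file" &
    send_pid=$!

    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # reset with the command outstanding
    wait "$send_pid"

    # Decode the completion the same way base64_decode_bits does: base64 to
    # raw bytes to hex, then read the status code / status code type fields.
    jq -r .cpl "$tmp_file" | base64 -d | hexdump -ve '/1 "0x%02x\n"'

In this run the decoded status came back as SC=0x1, SCT=0x0, matching the injected values, and the two-second reset-to-completion time stayed inside the five-second limit, so the test passes.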
common/autotest_common.sh@1103 -- # xtrace_disable 00:29:25.964 07:42:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:25.964 ************************************ 00:29:25.964 START TEST nvme_fio 00:29:25.964 ************************************ 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:25.964 07:42:19 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:25.964 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:26.529 EAL: TSC is not safe to use in SMP mode 00:29:26.529 EAL: TSC is not invariant 00:29:26.529 [2024-05-16 07:42:19.892462] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:26.529 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:26.529 07:42:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:27.126 EAL: TSC is not safe to use in SMP mode 00:29:27.126 EAL: TSC is not invariant 00:29:27.126 [2024-05-16 07:42:20.498146] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:27.126 07:42:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:29:27.126 07:42:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:27.126 07:42:20 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # shift 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:27.126 07:42:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:29:27.126 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:27.126 fio-3.35 00:29:27.126 Starting 1 thread 00:29:27.694 EAL: TSC is not safe to use in SMP mode 00:29:27.694 EAL: TSC is not invariant 00:29:27.694 [2024-05-16 07:42:21.205260] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:31.084 00:29:31.084 test: (groupid=0, jobs=1): err= 0: pid=102870: Thu May 16 07:42:24 2024 00:29:31.084 read: IOPS=44.4k, BW=173MiB/s (182MB/s)(347MiB/2002msec) 00:29:31.084 slat (nsec): min=432, max=22067, avg=672.57, stdev=424.72 00:29:31.084 clat (usec): min=287, max=3662, avg=1440.94, stdev=331.10 00:29:31.084 lat (usec): min=288, max=3668, avg=1441.62, stdev=331.16 00:29:31.084 clat percentiles (usec): 00:29:31.084 | 1.00th=[ 898], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1205], 00:29:31.084 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1434], 00:29:31.084 | 70.00th=[ 1500], 80.00th=[ 1631], 90.00th=[ 1860], 95.00th=[ 2057], 00:29:31.084 | 99.00th=[ 2737], 99.50th=[ 3032], 99.90th=[ 3326], 99.95th=[ 3425], 00:29:31.084 | 99.99th=[ 3556] 00:29:31.084 bw ( KiB/s): min=147162, max=197816, per=98.91%, avg=175726.67, stdev=25940.40, samples=3 00:29:31.084 iops : min=36790, max=49454, avg=43931.33, stdev=6485.31, samples=3 00:29:31.084 write: IOPS=44.3k, BW=173MiB/s (181MB/s)(346MiB/2002msec); 0 zone resets 00:29:31.084 slat (nsec): min=451, max=19756, avg=832.67, stdev=442.57 00:29:31.084 clat (usec): min=298, max=3602, avg=1440.90, stdev=331.71 00:29:31.084 lat (usec): min=299, max=3605, avg=1441.73, stdev=331.77 00:29:31.084 clat percentiles (usec): 00:29:31.084 | 1.00th=[ 889], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1205], 00:29:31.084 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1369], 60.00th=[ 1434], 
00:29:31.084 | 70.00th=[ 1500], 80.00th=[ 1631], 90.00th=[ 1860], 95.00th=[ 2057], 00:29:31.084 | 99.00th=[ 2737], 99.50th=[ 3032], 99.90th=[ 3294], 99.95th=[ 3392], 00:29:31.084 | 99.99th=[ 3523] 00:29:31.084 bw ( KiB/s): min=147256, max=196501, per=98.70%, avg=174845.33, stdev=25153.01, samples=3 00:29:31.084 iops : min=36814, max=49125, avg=43711.00, stdev=6288.06, samples=3 00:29:31.084 lat (usec) : 500=0.06%, 750=0.38%, 1000=2.03% 00:29:31.084 lat (msec) : 2=91.38%, 4=6.15% 00:29:31.084 cpu : usr=100.00%, sys=0.00%, ctx=24, majf=0, minf=2 00:29:31.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:29:31.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.084 issued rwts: total=88916,88664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.084 00:29:31.084 Run status group 0 (all jobs): 00:29:31.084 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=347MiB (364MB), run=2002-2002msec 00:29:31.084 WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2002-2002msec 00:29:31.650 07:42:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:31.650 07:42:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:29:31.650 00:29:31.650 real 0m5.762s 00:29:31.650 user 0m2.442s 00:29:31.650 sys 0m3.248s 00:29:31.650 07:42:25 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.650 ************************************ 00:29:31.650 END TEST nvme_fio 00:29:31.650 ************************************ 00:29:31.650 07:42:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:29:31.650 00:29:31.650 real 0m25.740s 00:29:31.650 user 0m32.221s 00:29:31.650 sys 0m12.416s 00:29:31.650 07:42:25 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.650 ************************************ 00:29:31.650 END TEST nvme 00:29:31.650 ************************************ 00:29:31.650 07:42:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:29:31.650 07:42:25 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:29:31.650 07:42:25 -- spdk/autotest.sh@217 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:31.650 07:42:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:31.650 07:42:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.650 07:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:31.650 ************************************ 00:29:31.650 START TEST nvme_scc 00:29:31.650 ************************************ 00:29:31.650 07:42:25 nvme_scc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:31.908 * Looking for test storage... 
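The nvme_fio test above runs stock fio against the controller through SPDK's external ioengine: the spdk_nvme plugin is LD_PRELOADed and the filename encodes the transport and PCI address instead of naming a block device. Stripped of the wrapper functions, the traced invocation reduces to the command below (paths are the ones used in this workspace; example_config.fio ships with SPDK and sets ioengine=spdk, iodepth=128, randrw):

    # fio with the SPDK NVMe plugin, as exercised by app/fio/nvme in this run.
    LD_PRELOAD=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

Note the dotted traddr form (0000.00.10.0) the plugin expects on the command line; with this job the run above sustained roughly 173 MiB/s for both reads and writes at about 44k IOPS.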
00:29:31.908 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:29:31.908 07:42:25 nvme_scc -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:31.908 07:42:25 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.908 07:42:25 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.908 07:42:25 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.908 07:42:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:31.908 07:42:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:31.908 07:42:25 nvme_scc -- paths/export.sh@4 -- # export PATH 00:29:31.908 07:42:25 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:31.908 07:42:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:29:31.908 07:42:25 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.908 07:42:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:29:31.908 07:42:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:29:31.908 07:42:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:29:31.908 00:29:31.908 real 0m0.187s 00:29:31.908 user 0m0.156s 00:29:31.908 sys 0m0.126s 00:29:31.908 07:42:25 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.908 07:42:25 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:29:31.908 ************************************ 00:29:31.908 END TEST nvme_scc 00:29:31.908 ************************************ 00:29:31.908 07:42:25 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:29:31.908 07:42:25 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:29:31.908 07:42:25 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:29:31.908 07:42:25 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:29:31.908 07:42:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 
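nvme_scc exits almost immediately on this host: after sourcing the shared nvme helper functions it checks the platform and returns success, because the simple copy command test is Linux-only. The guard traced above reduces to roughly the following (the exact wording in test/nvme/nvme_scc.sh may differ):

    # Platform guard: on FreeBSD there is nothing to test, so report success.
    if [[ $(uname) != Linux ]]; then
        exit 0
    fi

which is why the test finishes in well under a second without touching the controller.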
00:29:31.908 07:42:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:31.908 07:42:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:31.908 07:42:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.908 07:42:25 -- common/autotest_common.sh@10 -- # set +x 00:29:31.908 ************************************ 00:29:31.908 START TEST nvme_rpc 00:29:31.908 ************************************ 00:29:31.909 07:42:25 nvme_rpc -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:32.167 * Looking for test storage... 00:29:32.167 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1510 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67676 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:32.167 07:42:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67676 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 67676 ']' 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.167 07:42:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.167 [2024-05-16 07:42:25.646843] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:29:32.167 [2024-05-16 07:42:25.647021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:32.815 EAL: TSC is not safe to use in SMP mode 00:29:32.815 EAL: TSC is not invariant 00:29:32.815 [2024-05-16 07:42:26.154245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.815 [2024-05-16 07:42:26.251006] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:32.815 [2024-05-16 07:42:26.251086] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:29:32.815 [2024-05-16 07:42:26.254513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.815 [2024-05-16 07:42:26.254503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.074 07:42:26 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:33.074 07:42:26 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:29:33.074 07:42:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:33.333 [2024-05-16 07:42:26.766064] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:29:33.333 Nvme0n1 00:29:33.333 07:42:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:33.333 07:42:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:33.590 request: 00:29:33.590 { 00:29:33.590 "filename": "non_existing_file", 00:29:33.590 "bdev_name": "Nvme0n1", 00:29:33.590 "method": "bdev_nvme_apply_firmware", 00:29:33.590 "req_id": 1 00:29:33.590 } 00:29:33.590 Got JSON-RPC error response 00:29:33.590 response: 00:29:33.590 { 00:29:33.590 "code": -32603, 00:29:33.590 "message": "open file failed." 
00:29:33.590 } 00:29:33.590 07:42:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:33.590 07:42:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:33.590 07:42:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:33.848 07:42:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:33.848 07:42:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67676 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 67676 ']' 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 67676 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@954 -- # ps -c -o command 67676 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@954 -- # tail -1 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:29:33.848 killing process with pid 67676 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67676' 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@965 -- # kill 67676 00:29:33.848 07:42:27 nvme_rpc -- common/autotest_common.sh@970 -- # wait 67676 00:29:34.106 00:29:34.106 real 0m2.083s 00:29:34.106 user 0m3.521s 00:29:34.106 sys 0m0.818s 00:29:34.106 07:42:27 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:34.106 07:42:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:34.106 ************************************ 00:29:34.106 END TEST nvme_rpc 00:29:34.106 ************************************ 00:29:34.106 07:42:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:34.106 07:42:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:34.106 07:42:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:34.106 07:42:27 -- common/autotest_common.sh@10 -- # set +x 00:29:34.106 ************************************ 00:29:34.106 START TEST nvme_rpc_timeouts 00:29:34.106 ************************************ 00:29:34.106 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:34.365 * Looking for test storage... 
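The nvme_rpc test that just finished is a negative test: it attaches the PCIe controller as bdev Nvme0, asks bdev_nvme_apply_firmware to load a file that does not exist, and passes only when the RPC fails with the -32603 "open file failed." response shown above. The traced RPC sequence is essentially the following (the error handling around rv is condensed here):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the controller; namespace 1 becomes bdev Nvme0n1.
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

    # Expected failure: the firmware image path does not exist, so the
    # target answers with JSON-RPC error -32603 ("open file failed.").
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "bdev_nvme_apply_firmware unexpectedly succeeded" >&2
        exit 1
    fi

    # Detach before the target is shut down.
    $rpc bdev_nvme_detach_controller Nvme0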
00:29:34.365 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67713 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67713 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67741 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:34.365 07:42:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67741 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 67741 ']' 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:34.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:34.365 07:42:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:29:34.365 [2024-05-16 07:42:27.724838] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:34.365 [2024-05-16 07:42:27.725074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:29:34.648 EAL: TSC is not safe to use in SMP mode 00:29:34.648 EAL: TSC is not invariant 00:29:34.648 [2024-05-16 07:42:28.174182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:34.906 [2024-05-16 07:42:28.268129] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:29:34.906 [2024-05-16 07:42:28.268200] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:29:34.906 [2024-05-16 07:42:28.271618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.906 [2024-05-16 07:42:28.271610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.474 07:42:28 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:35.474 07:42:28 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:29:35.474 Checking default timeout settings: 00:29:35.474 07:42:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:35.474 07:42:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:35.733 Making settings changes with rpc: 00:29:35.733 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:35.733 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:35.992 Check default vs. modified settings: 00:29:35.992 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:35.992 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:36.256 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:36.256 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.256 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67713 00:29:36.256 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.256 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:36.257 Setting action_on_timeout is changed as expected. 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:36.257 Setting timeout_us is changed as expected. 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:36.257 Setting timeout_admin_us is changed as expected. 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67713 /tmp/settings_modified_67713 00:29:36.257 07:42:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67741 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 67741 ']' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 67741 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' FreeBSD = Linux ']' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps -c -o command 67741 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # tail -1 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=spdk_tgt 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' spdk_tgt = sudo ']' 00:29:36.257 killing process with pid 67741 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67741' 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 67741 00:29:36.257 07:42:29 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 67741 00:29:36.520 RPC TIMEOUT SETTING TEST PASSED. 00:29:36.520 07:42:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:29:36.520 00:29:36.520 real 0m2.464s 00:29:36.520 user 0m4.685s 00:29:36.520 sys 0m0.832s 00:29:36.520 07:42:30 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:36.520 07:42:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:29:36.520 ************************************ 00:29:36.520 END TEST nvme_rpc_timeouts 00:29:36.520 ************************************ 00:29:36.520 07:42:30 -- spdk/autotest.sh@239 -- # uname -s 00:29:36.520 07:42:30 -- spdk/autotest.sh@239 -- # '[' FreeBSD = Linux ']' 00:29:36.520 07:42:30 -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]] 00:29:36.520 07:42:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:29:36.520 07:42:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:29:36.520 07:42:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.520 07:42:30 -- common/autotest_common.sh@10 -- # set +x 00:29:36.777 07:42:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:36.777 07:42:30 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:29:36.777 07:42:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 
00:29:36.777 07:42:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:36.777 07:42:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:36.777 07:42:30 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:29:36.777 07:42:30 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:29:36.777 07:42:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:36.777 07:42:30 -- common/autotest_common.sh@10 -- # set +x 00:29:36.777 07:42:30 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:29:36.777 07:42:30 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:36.777 07:42:30 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:36.777 07:42:30 -- common/autotest_common.sh@10 -- # set +x 00:29:37.343 setup.sh cleanup function not yet supported on FreeBSD 00:29:37.343 07:42:30 -- common/autotest_common.sh@1447 -- # return 0 00:29:37.343 07:42:30 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:29:37.343 07:42:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.343 07:42:30 -- common/autotest_common.sh@10 -- # set +x 00:29:37.343 07:42:30 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:29:37.343 07:42:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.343 07:42:30 -- common/autotest_common.sh@10 -- # set +x 00:29:37.343 07:42:30 -- spdk/autotest.sh@383 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:37.343 07:42:30 -- spdk/autotest.sh@385 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:37.343 07:42:30 -- spdk/autotest.sh@387 -- # hash lcov 00:29:37.343 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 387: hash: lcov: not found 00:29:37.601 07:42:31 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:37.601 07:42:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:37.601 07:42:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.601 07:42:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.601 07:42:31 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:37.601 07:42:31 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:37.601 07:42:31 -- paths/export.sh@4 -- $ export PATH 00:29:37.601 07:42:31 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:29:37.601 07:42:31 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:29:37.601 07:42:31 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:37.601 07:42:31 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715845351.XXXXXX 00:29:37.601 07:42:31 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715845351.XXXXXX.uhijxKze 00:29:37.601 07:42:31 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:37.601 07:42:31 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:37.601 07:42:31 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:29:37.601 07:42:31 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp' 00:29:37.602 07:42:31 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:37.602 07:42:31 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:37.602 07:42:31 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:37.602 07:42:31 -- common/autotest_common.sh@10 -- $ set +x 00:29:37.602 07:42:31 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:29:37.602 07:42:31 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:37.602 07:42:31 -- pm/common@17 -- $ local monitor 00:29:37.602 07:42:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:37.602 07:42:31 -- pm/common@25 -- $ sleep 1 00:29:37.859 07:42:31 -- pm/common@21 -- $ date +%s 00:29:37.859 07:42:31 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715845351 00:29:37.859 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715845351_collect-vmstat.pm.log 00:29:38.793 07:42:32 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:38.793 07:42:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:38.793 07:42:32 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:29:38.793 07:42:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:38.793 07:42:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:38.793 07:42:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:38.793 07:42:32 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:38.793 07:42:32 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:38.793 07:42:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:38.793 07:42:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:38.793 07:42:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:38.793 07:42:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:38.793 07:42:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:38.793 07:42:32 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:38.793 07:42:32 -- pm/common@44 -- $ pid=67962 00:29:38.793 07:42:32 -- pm/common@50 -- $ kill -TERM 67962 00:29:38.793 + [[ -n 1268 ]] 00:29:38.793 + sudo kill 1268 00:29:38.805 [Pipeline] } 00:29:38.827 [Pipeline] // timeout 00:29:38.833 [Pipeline] } 00:29:38.852 [Pipeline] // stage 00:29:38.858 [Pipeline] } 00:29:38.875 [Pipeline] // catchError 00:29:38.885 [Pipeline] stage 00:29:38.887 [Pipeline] { (Stop VM) 00:29:38.905 [Pipeline] sh 00:29:39.186 + vagrant halt 00:29:43.373 ==> default: Halting domain... 00:30:05.367 [Pipeline] sh 00:30:05.645 + vagrant destroy -f 00:30:09.832 ==> default: Removing domain... 
00:30:09.847 [Pipeline] sh 00:30:10.127 + mv output /var/jenkins/workspace/freebsd-vg-autotest_3/output 00:30:10.138 [Pipeline] } 00:30:10.156 [Pipeline] // stage 00:30:10.162 [Pipeline] } 00:30:10.180 [Pipeline] // dir 00:30:10.188 [Pipeline] } 00:30:10.205 [Pipeline] // wrap 00:30:10.212 [Pipeline] } 00:30:10.230 [Pipeline] // catchError 00:30:10.240 [Pipeline] stage 00:30:10.242 [Pipeline] { (Epilogue) 00:30:10.259 [Pipeline] sh 00:30:10.538 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:10.549 [Pipeline] catchError 00:30:10.551 [Pipeline] { 00:30:10.561 [Pipeline] sh 00:30:10.834 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:11.091 Artifacts sizes are good 00:30:11.101 [Pipeline] } 00:30:11.119 [Pipeline] // catchError 00:30:11.131 [Pipeline] archiveArtifacts 00:30:11.138 Archiving artifacts 00:30:11.171 [Pipeline] cleanWs 00:30:11.182 [WS-CLEANUP] Deleting project workspace... 00:30:11.182 [WS-CLEANUP] Deferred wipeout is used... 00:30:11.188 [WS-CLEANUP] done 00:30:11.189 [Pipeline] } 00:30:11.205 [Pipeline] // stage 00:30:11.210 [Pipeline] } 00:30:11.227 [Pipeline] // node 00:30:11.233 [Pipeline] End of Pipeline 00:30:11.279 Finished: SUCCESS